Test Report: KVM_Linux_crio 19265

4b25178fc7513411450a4d543cff32ee34a2d14b:2024-07-17:35370

Failed tests (31/326)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 154.01
41 TestAddons/parallel/MetricsServer 355.23
54 TestAddons/StoppedEnableDisable 154.31
157 TestFunctional/parallel/ImageCommands/ImageRemove 2.63
173 TestMultiControlPlane/serial/StopSecondaryNode 141.76
175 TestMultiControlPlane/serial/RestartSecondaryNode 58.04
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 798.13
178 TestMultiControlPlane/serial/DeleteSecondaryNode 13.55
180 TestMultiControlPlane/serial/StopCluster 173.14
240 TestMultiNode/serial/RestartKeepsNodes 322.27
242 TestMultiNode/serial/StopMultiNode 141.3
249 TestPreload 163.11
257 TestKubernetesUpgrade 414.74
270 TestStartStop/group/old-k8s-version/serial/FirstStart 318.37
300 TestStartStop/group/old-k8s-version/serial/DeployApp 0.55
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 99.42
307 TestStartStop/group/old-k8s-version/serial/SecondStart 522.76
312 TestStartStop/group/embed-certs/serial/Stop 138.99
315 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.01
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 541.52
325 TestStartStop/group/no-preload/serial/Stop 138.95
326 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.25
329 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.29
330 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 312.38
331 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.43
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 436.64
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 473.62
373 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 425.84
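
For reference, the Ingress failure detailed below comes down to a single check: a curl against the ingress from inside the VM that never returns (the remote command exits with status 28, curl's timeout code). A rough sketch for reproducing that check by hand, using the commands from the trace below; it assumes an existing addons-860537 profile with the ingress addon enabled and a checkout of the minikube test tree for the testdata manifests:

# wait for the ingress-nginx controller pod to become ready
kubectl --context addons-860537 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

# deploy the test ingress plus the nginx pod and service it fronts
kubectl --context addons-860537 replace --force -f testdata/nginx-ingress-v1.yaml
kubectl --context addons-860537 replace --force -f testdata/nginx-pod-svc.yaml

# the step that failed in this run: curl the ingress from inside the VM with the test Host header
out/minikube-linux-amd64 -p addons-860537 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
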
TestAddons/parallel/Ingress (154.01s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-860537 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-860537 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-860537 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [19a96ab4-cd55-4419-b5a7-8b9e8823879f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [19a96ab4-cd55-4419-b5a7-8b9e8823879f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004171923s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
2024/07/17 00:07:59 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:07:59 [DEBUG] GET http://192.168.39.251:5000: retrying in 2s (3 left)
2024/07/17 00:08:01 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:08:01 [DEBUG] GET http://192.168.39.251:5000: retrying in 4s (2 left)
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-860537 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.063919864s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-860537 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.251
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-860537 addons disable ingress-dns --alsologtostderr -v=1: (1.311385594s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-860537 addons disable ingress --alsologtostderr -v=1: (7.687756372s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-860537 -n addons-860537
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-860537 logs -n 25: (1.259923149s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-375038                                                                     | download-only-375038 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-407804                                                                     | download-only-407804 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-020346                                                                     | download-only-020346 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-375038                                                                     | download-only-375038 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-998982 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | binary-mirror-998982                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46519                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-998982                                                                     | binary-mirror-998982 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| addons  | disable dashboard -p                                                                        | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | addons-860537                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | addons-860537                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-860537 --wait=true                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | -p addons-860537                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-860537 ssh cat                                                                       | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | /opt/local-path-provisioner/pvc-52a7cdd9-a848-453e-a1d0-34493d73230f_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-860537 addons disable                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | -p addons-860537                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-860537 ip                                                                            | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	| addons  | addons-860537 addons disable                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | addons-860537                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | addons-860537                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-860537 ssh curl -s                                                                   | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-860537 addons                                                                        | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-860537 addons                                                                        | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-860537 addons disable                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-860537 ip                                                                            | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC | 17 Jul 24 00:10 UTC |
	| addons  | addons-860537 addons disable                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC | 17 Jul 24 00:10 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-860537 addons disable                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC | 17 Jul 24 00:10 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:04:53
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:04:53.893456   20973 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:04:53.893708   20973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:53.893717   20973 out.go:304] Setting ErrFile to fd 2...
	I0717 00:04:53.893721   20973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:53.893902   20973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:04:53.894494   20973 out.go:298] Setting JSON to false
	I0717 00:04:53.895276   20973 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2843,"bootTime":1721171851,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:04:53.895336   20973 start.go:139] virtualization: kvm guest
	I0717 00:04:53.897223   20973 out.go:177] * [addons-860537] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:04:53.898526   20973 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:04:53.898529   20973 notify.go:220] Checking for updates...
	I0717 00:04:53.901049   20973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:04:53.902282   20973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:04:53.903540   20973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:04:53.904749   20973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:04:53.905896   20973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:04:53.907223   20973 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:04:53.940046   20973 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 00:04:53.941354   20973 start.go:297] selected driver: kvm2
	I0717 00:04:53.941369   20973 start.go:901] validating driver "kvm2" against <nil>
	I0717 00:04:53.941383   20973 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:04:53.942339   20973 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:04:53.942424   20973 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:04:53.957687   20973 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:04:53.957770   20973 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:04:53.958146   20973 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:04:53.958183   20973 cni.go:84] Creating CNI manager for ""
	I0717 00:04:53.958195   20973 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:04:53.958211   20973 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 00:04:53.958290   20973 start.go:340] cluster config:
	{Name:addons-860537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-860537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:04:53.958434   20973 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:04:53.960206   20973 out.go:177] * Starting "addons-860537" primary control-plane node in "addons-860537" cluster
	I0717 00:04:53.961486   20973 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:04:53.961525   20973 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:04:53.961531   20973 cache.go:56] Caching tarball of preloaded images
	I0717 00:04:53.961607   20973 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:04:53.961617   20973 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:04:53.961938   20973 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/config.json ...
	I0717 00:04:53.961958   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/config.json: {Name:mke28f9d9ed27413202277398c0d4001e090b138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:04:53.962084   20973 start.go:360] acquireMachinesLock for addons-860537: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:04:53.962129   20973 start.go:364] duration metric: took 31.046µs to acquireMachinesLock for "addons-860537"
	I0717 00:04:53.962146   20973 start.go:93] Provisioning new machine with config: &{Name:addons-860537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-860537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:04:53.962203   20973 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 00:04:53.963941   20973 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 00:04:53.964080   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:04:53.964126   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:04:53.978544   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0717 00:04:53.979062   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:04:53.979594   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:04:53.979623   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:04:53.979964   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:04:53.980141   20973 main.go:141] libmachine: (addons-860537) Calling .GetMachineName
	I0717 00:04:53.980302   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:04:53.980449   20973 start.go:159] libmachine.API.Create for "addons-860537" (driver="kvm2")
	I0717 00:04:53.980473   20973 client.go:168] LocalClient.Create starting
	I0717 00:04:53.980507   20973 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 00:04:54.396858   20973 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 00:04:54.447426   20973 main.go:141] libmachine: Running pre-create checks...
	I0717 00:04:54.447452   20973 main.go:141] libmachine: (addons-860537) Calling .PreCreateCheck
	I0717 00:04:54.447997   20973 main.go:141] libmachine: (addons-860537) Calling .GetConfigRaw
	I0717 00:04:54.448590   20973 main.go:141] libmachine: Creating machine...
	I0717 00:04:54.448608   20973 main.go:141] libmachine: (addons-860537) Calling .Create
	I0717 00:04:54.448761   20973 main.go:141] libmachine: (addons-860537) Creating KVM machine...
	I0717 00:04:54.450023   20973 main.go:141] libmachine: (addons-860537) DBG | found existing default KVM network
	I0717 00:04:54.450780   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:54.450642   20994 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0717 00:04:54.450804   20973 main.go:141] libmachine: (addons-860537) DBG | created network xml: 
	I0717 00:04:54.450816   20973 main.go:141] libmachine: (addons-860537) DBG | <network>
	I0717 00:04:54.450825   20973 main.go:141] libmachine: (addons-860537) DBG |   <name>mk-addons-860537</name>
	I0717 00:04:54.450831   20973 main.go:141] libmachine: (addons-860537) DBG |   <dns enable='no'/>
	I0717 00:04:54.450835   20973 main.go:141] libmachine: (addons-860537) DBG |   
	I0717 00:04:54.450843   20973 main.go:141] libmachine: (addons-860537) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 00:04:54.450848   20973 main.go:141] libmachine: (addons-860537) DBG |     <dhcp>
	I0717 00:04:54.450854   20973 main.go:141] libmachine: (addons-860537) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 00:04:54.450859   20973 main.go:141] libmachine: (addons-860537) DBG |     </dhcp>
	I0717 00:04:54.450864   20973 main.go:141] libmachine: (addons-860537) DBG |   </ip>
	I0717 00:04:54.450871   20973 main.go:141] libmachine: (addons-860537) DBG |   
	I0717 00:04:54.450944   20973 main.go:141] libmachine: (addons-860537) DBG | </network>
	I0717 00:04:54.450976   20973 main.go:141] libmachine: (addons-860537) DBG | 
	I0717 00:04:54.456748   20973 main.go:141] libmachine: (addons-860537) DBG | trying to create private KVM network mk-addons-860537 192.168.39.0/24...
	I0717 00:04:54.525501   20973 main.go:141] libmachine: (addons-860537) DBG | private KVM network mk-addons-860537 192.168.39.0/24 created
	I0717 00:04:54.525535   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:54.525458   20994 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:04:54.525556   20973 main.go:141] libmachine: (addons-860537) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537 ...
	I0717 00:04:54.525575   20973 main.go:141] libmachine: (addons-860537) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 00:04:54.525652   20973 main.go:141] libmachine: (addons-860537) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 00:04:54.767036   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:54.766915   20994 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa...
	I0717 00:04:55.228897   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:55.228774   20994 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/addons-860537.rawdisk...
	I0717 00:04:55.228923   20973 main.go:141] libmachine: (addons-860537) DBG | Writing magic tar header
	I0717 00:04:55.228937   20973 main.go:141] libmachine: (addons-860537) DBG | Writing SSH key tar header
	I0717 00:04:55.228945   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:55.228887   20994 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537 ...
	I0717 00:04:55.229034   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537
	I0717 00:04:55.229070   20973 main.go:141] libmachine: (addons-860537) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537 (perms=drwx------)
	I0717 00:04:55.229083   20973 main.go:141] libmachine: (addons-860537) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:04:55.229094   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 00:04:55.229109   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:04:55.229122   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 00:04:55.229136   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:04:55.229154   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:04:55.229167   20973 main.go:141] libmachine: (addons-860537) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 00:04:55.229182   20973 main.go:141] libmachine: (addons-860537) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 00:04:55.229191   20973 main.go:141] libmachine: (addons-860537) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:04:55.229200   20973 main.go:141] libmachine: (addons-860537) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:04:55.229205   20973 main.go:141] libmachine: (addons-860537) Creating domain...
	I0717 00:04:55.229214   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home
	I0717 00:04:55.229228   20973 main.go:141] libmachine: (addons-860537) DBG | Skipping /home - not owner
	I0717 00:04:55.230263   20973 main.go:141] libmachine: (addons-860537) define libvirt domain using xml: 
	I0717 00:04:55.230295   20973 main.go:141] libmachine: (addons-860537) <domain type='kvm'>
	I0717 00:04:55.230306   20973 main.go:141] libmachine: (addons-860537)   <name>addons-860537</name>
	I0717 00:04:55.230312   20973 main.go:141] libmachine: (addons-860537)   <memory unit='MiB'>4000</memory>
	I0717 00:04:55.230320   20973 main.go:141] libmachine: (addons-860537)   <vcpu>2</vcpu>
	I0717 00:04:55.230327   20973 main.go:141] libmachine: (addons-860537)   <features>
	I0717 00:04:55.230335   20973 main.go:141] libmachine: (addons-860537)     <acpi/>
	I0717 00:04:55.230344   20973 main.go:141] libmachine: (addons-860537)     <apic/>
	I0717 00:04:55.230352   20973 main.go:141] libmachine: (addons-860537)     <pae/>
	I0717 00:04:55.230361   20973 main.go:141] libmachine: (addons-860537)     
	I0717 00:04:55.230369   20973 main.go:141] libmachine: (addons-860537)   </features>
	I0717 00:04:55.230383   20973 main.go:141] libmachine: (addons-860537)   <cpu mode='host-passthrough'>
	I0717 00:04:55.230391   20973 main.go:141] libmachine: (addons-860537)   
	I0717 00:04:55.230399   20973 main.go:141] libmachine: (addons-860537)   </cpu>
	I0717 00:04:55.230407   20973 main.go:141] libmachine: (addons-860537)   <os>
	I0717 00:04:55.230414   20973 main.go:141] libmachine: (addons-860537)     <type>hvm</type>
	I0717 00:04:55.230426   20973 main.go:141] libmachine: (addons-860537)     <boot dev='cdrom'/>
	I0717 00:04:55.230435   20973 main.go:141] libmachine: (addons-860537)     <boot dev='hd'/>
	I0717 00:04:55.230447   20973 main.go:141] libmachine: (addons-860537)     <bootmenu enable='no'/>
	I0717 00:04:55.230456   20973 main.go:141] libmachine: (addons-860537)   </os>
	I0717 00:04:55.230464   20973 main.go:141] libmachine: (addons-860537)   <devices>
	I0717 00:04:55.230478   20973 main.go:141] libmachine: (addons-860537)     <disk type='file' device='cdrom'>
	I0717 00:04:55.230495   20973 main.go:141] libmachine: (addons-860537)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/boot2docker.iso'/>
	I0717 00:04:55.230505   20973 main.go:141] libmachine: (addons-860537)       <target dev='hdc' bus='scsi'/>
	I0717 00:04:55.230513   20973 main.go:141] libmachine: (addons-860537)       <readonly/>
	I0717 00:04:55.230524   20973 main.go:141] libmachine: (addons-860537)     </disk>
	I0717 00:04:55.230538   20973 main.go:141] libmachine: (addons-860537)     <disk type='file' device='disk'>
	I0717 00:04:55.230554   20973 main.go:141] libmachine: (addons-860537)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:04:55.230569   20973 main.go:141] libmachine: (addons-860537)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/addons-860537.rawdisk'/>
	I0717 00:04:55.230580   20973 main.go:141] libmachine: (addons-860537)       <target dev='hda' bus='virtio'/>
	I0717 00:04:55.230588   20973 main.go:141] libmachine: (addons-860537)     </disk>
	I0717 00:04:55.230594   20973 main.go:141] libmachine: (addons-860537)     <interface type='network'>
	I0717 00:04:55.230605   20973 main.go:141] libmachine: (addons-860537)       <source network='mk-addons-860537'/>
	I0717 00:04:55.230620   20973 main.go:141] libmachine: (addons-860537)       <model type='virtio'/>
	I0717 00:04:55.230633   20973 main.go:141] libmachine: (addons-860537)     </interface>
	I0717 00:04:55.230643   20973 main.go:141] libmachine: (addons-860537)     <interface type='network'>
	I0717 00:04:55.230665   20973 main.go:141] libmachine: (addons-860537)       <source network='default'/>
	I0717 00:04:55.230675   20973 main.go:141] libmachine: (addons-860537)       <model type='virtio'/>
	I0717 00:04:55.230705   20973 main.go:141] libmachine: (addons-860537)     </interface>
	I0717 00:04:55.230726   20973 main.go:141] libmachine: (addons-860537)     <serial type='pty'>
	I0717 00:04:55.230735   20973 main.go:141] libmachine: (addons-860537)       <target port='0'/>
	I0717 00:04:55.230747   20973 main.go:141] libmachine: (addons-860537)     </serial>
	I0717 00:04:55.230759   20973 main.go:141] libmachine: (addons-860537)     <console type='pty'>
	I0717 00:04:55.230768   20973 main.go:141] libmachine: (addons-860537)       <target type='serial' port='0'/>
	I0717 00:04:55.230776   20973 main.go:141] libmachine: (addons-860537)     </console>
	I0717 00:04:55.230781   20973 main.go:141] libmachine: (addons-860537)     <rng model='virtio'>
	I0717 00:04:55.230788   20973 main.go:141] libmachine: (addons-860537)       <backend model='random'>/dev/random</backend>
	I0717 00:04:55.230793   20973 main.go:141] libmachine: (addons-860537)     </rng>
	I0717 00:04:55.230798   20973 main.go:141] libmachine: (addons-860537)     
	I0717 00:04:55.230810   20973 main.go:141] libmachine: (addons-860537)     
	I0717 00:04:55.230834   20973 main.go:141] libmachine: (addons-860537)   </devices>
	I0717 00:04:55.230851   20973 main.go:141] libmachine: (addons-860537) </domain>
	I0717 00:04:55.230865   20973 main.go:141] libmachine: (addons-860537) 
	I0717 00:04:55.236742   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:60:f7:22 in network default
	I0717 00:04:55.237381   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:55.237402   20973 main.go:141] libmachine: (addons-860537) Ensuring networks are active...
	I0717 00:04:55.238094   20973 main.go:141] libmachine: (addons-860537) Ensuring network default is active
	I0717 00:04:55.238406   20973 main.go:141] libmachine: (addons-860537) Ensuring network mk-addons-860537 is active
	I0717 00:04:55.238906   20973 main.go:141] libmachine: (addons-860537) Getting domain xml...
	I0717 00:04:55.239654   20973 main.go:141] libmachine: (addons-860537) Creating domain...
	I0717 00:04:56.643575   20973 main.go:141] libmachine: (addons-860537) Waiting to get IP...
	I0717 00:04:56.644319   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:56.644724   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:56.644764   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:56.644703   20994 retry.go:31] will retry after 258.934541ms: waiting for machine to come up
	I0717 00:04:56.905312   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:56.905759   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:56.905787   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:56.905721   20994 retry.go:31] will retry after 290.950508ms: waiting for machine to come up
	I0717 00:04:57.198168   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:57.198554   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:57.198582   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:57.198510   20994 retry.go:31] will retry after 392.511162ms: waiting for machine to come up
	I0717 00:04:57.593008   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:57.593478   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:57.593507   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:57.593427   20994 retry.go:31] will retry after 536.216901ms: waiting for machine to come up
	I0717 00:04:58.131098   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:58.131550   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:58.131573   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:58.131500   20994 retry.go:31] will retry after 486.129485ms: waiting for machine to come up
	I0717 00:04:58.619211   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:58.619623   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:58.619650   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:58.619574   20994 retry.go:31] will retry after 643.494017ms: waiting for machine to come up
	I0717 00:04:59.265036   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:59.265495   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:59.265523   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:59.265457   20994 retry.go:31] will retry after 750.648926ms: waiting for machine to come up
	I0717 00:05:00.017338   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:00.017711   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:00.017750   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:00.017670   20994 retry.go:31] will retry after 1.031561955s: waiting for machine to come up
	I0717 00:05:01.050504   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:01.050994   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:01.051023   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:01.050924   20994 retry.go:31] will retry after 1.467936025s: waiting for machine to come up
	I0717 00:05:02.519944   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:02.520329   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:02.520350   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:02.520300   20994 retry.go:31] will retry after 1.680538008s: waiting for machine to come up
	I0717 00:05:04.202850   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:04.203293   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:04.203330   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:04.203259   20994 retry.go:31] will retry after 2.183867343s: waiting for machine to come up
	I0717 00:05:06.388764   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:06.389189   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:06.389212   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:06.389150   20994 retry.go:31] will retry after 2.378398435s: waiting for machine to come up
	I0717 00:05:08.770797   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:08.771325   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:08.771343   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:08.771294   20994 retry.go:31] will retry after 3.027010323s: waiting for machine to come up
	I0717 00:05:11.802107   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:11.802574   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:11.802602   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:11.802523   20994 retry.go:31] will retry after 3.456497207s: waiting for machine to come up
	I0717 00:05:15.260945   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.261431   20973 main.go:141] libmachine: (addons-860537) Found IP for machine: 192.168.39.251
	I0717 00:05:15.261456   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has current primary IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.261465   20973 main.go:141] libmachine: (addons-860537) Reserving static IP address...
	I0717 00:05:15.261901   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find host DHCP lease matching {name: "addons-860537", mac: "52:54:00:fb:b6:26", ip: "192.168.39.251"} in network mk-addons-860537
	I0717 00:05:15.335142   20973 main.go:141] libmachine: (addons-860537) DBG | Getting to WaitForSSH function...
	I0717 00:05:15.335171   20973 main.go:141] libmachine: (addons-860537) Reserved static IP address: 192.168.39.251
	I0717 00:05:15.335193   20973 main.go:141] libmachine: (addons-860537) Waiting for SSH to be available...
	I0717 00:05:15.337627   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.338007   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:15.338036   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.338241   20973 main.go:141] libmachine: (addons-860537) DBG | Using SSH client type: external
	I0717 00:05:15.338263   20973 main.go:141] libmachine: (addons-860537) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa (-rw-------)
	I0717 00:05:15.338303   20973 main.go:141] libmachine: (addons-860537) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:05:15.338314   20973 main.go:141] libmachine: (addons-860537) DBG | About to run SSH command:
	I0717 00:05:15.338325   20973 main.go:141] libmachine: (addons-860537) DBG | exit 0
	I0717 00:05:15.477154   20973 main.go:141] libmachine: (addons-860537) DBG | SSH cmd err, output: <nil>: 
	I0717 00:05:15.477444   20973 main.go:141] libmachine: (addons-860537) KVM machine creation complete!
	I0717 00:05:15.477716   20973 main.go:141] libmachine: (addons-860537) Calling .GetConfigRaw
	I0717 00:05:15.478279   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:15.478482   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:15.478628   20973 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:05:15.478643   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:15.480235   20973 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:05:15.480249   20973 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:05:15.480259   20973 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:05:15.480265   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:15.482863   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.483330   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:15.483361   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.483501   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:15.483690   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.483851   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.484025   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:15.484184   20973 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:15.484364   20973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0717 00:05:15.484376   20973 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:05:15.599935   20973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:05:15.599965   20973 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:05:15.599975   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:15.602725   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.603097   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:15.603127   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.603313   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:15.603523   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.603719   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.603827   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:15.604030   20973 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:15.604230   20973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0717 00:05:15.604242   20973 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:05:15.721476   20973 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:05:15.721559   20973 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:05:15.721570   20973 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:05:15.721576   20973 main.go:141] libmachine: (addons-860537) Calling .GetMachineName
	I0717 00:05:15.721800   20973 buildroot.go:166] provisioning hostname "addons-860537"
	I0717 00:05:15.721829   20973 main.go:141] libmachine: (addons-860537) Calling .GetMachineName
	I0717 00:05:15.722011   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:15.724258   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.724614   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:15.724634   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.724808   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:15.724998   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.725147   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.725267   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:15.725403   20973 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:15.725598   20973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0717 00:05:15.725616   20973 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-860537 && echo "addons-860537" | sudo tee /etc/hostname
	I0717 00:05:15.857577   20973 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-860537
	
	I0717 00:05:15.857604   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:15.860046   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.860371   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:15.860397   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.860582   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:15.860780   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.860919   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.861033   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:15.861182   20973 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:15.861338   20973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0717 00:05:15.861353   20973 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-860537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-860537/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-860537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:05:15.994136   20973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:05:15.994166   20973 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:05:15.994193   20973 buildroot.go:174] setting up certificates
	I0717 00:05:15.994206   20973 provision.go:84] configureAuth start
	I0717 00:05:15.994215   20973 main.go:141] libmachine: (addons-860537) Calling .GetMachineName
	I0717 00:05:15.994482   20973 main.go:141] libmachine: (addons-860537) Calling .GetIP
	I0717 00:05:15.997035   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.997382   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:15.997406   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.997620   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:15.999815   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.000166   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.000199   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.000335   20973 provision.go:143] copyHostCerts
	I0717 00:05:16.000429   20973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:05:16.000599   20973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:05:16.000688   20973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:05:16.000754   20973 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.addons-860537 san=[127.0.0.1 192.168.39.251 addons-860537 localhost minikube]
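
provision.go then issues a server certificate signed by the local minikube CA, with the SAN list shown above (the node IP, the machine hostname, localhost and minikube). A minimal sketch of producing such a SAN-bearing certificate with Go's crypto/x509, using a freshly generated throwaway CA in place of ca.pem/ca-key.pem; this is illustrative, not minikube's provision code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // issueServerCert signs a server certificate with the given CA for the SANs
    // listed in the log (the node IP and 127.0.0.1 plus hostname aliases).
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-860537"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.251")},
    		DNSNames:     []string{"addons-860537", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return err
    	}
    	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	if err := os.WriteFile("server.pem", certPEM, 0644); err != nil {
    		return err
    	}
    	return os.WriteFile("server-key.pem", keyPEM, 0600)
    }

    func main() {
    	// For the sketch, generate a throwaway CA instead of loading minikube's.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    		IsCA:                  true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)
    	if err := issueServerCert(caCert, caKey); err != nil {
    		panic(err)
    	}
    }
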
	I0717 00:05:16.206355   20973 provision.go:177] copyRemoteCerts
	I0717 00:05:16.206422   20973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:05:16.206450   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:16.209390   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.209827   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.209851   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.210106   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:16.210335   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:16.210500   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:16.210669   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:16.299799   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:05:16.325505   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:05:16.352381   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:05:16.383320   20973 provision.go:87] duration metric: took 389.100847ms to configureAuth
	I0717 00:05:16.383352   20973 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:05:16.383544   20973 config.go:182] Loaded profile config "addons-860537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:05:16.383627   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:16.386526   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.386851   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.386880   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.387088   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:16.387331   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:16.387500   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:16.387669   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:16.387846   20973 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:16.388042   20973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0717 00:05:16.388057   20973 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:05:16.739279   20973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:05:16.739306   20973 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:05:16.739316   20973 main.go:141] libmachine: (addons-860537) Calling .GetURL
	I0717 00:05:16.740610   20973 main.go:141] libmachine: (addons-860537) DBG | Using libvirt version 6000000
	I0717 00:05:16.742872   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.743185   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.743212   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.743352   20973 main.go:141] libmachine: Docker is up and running!
	I0717 00:05:16.743369   20973 main.go:141] libmachine: Reticulating splines...
	I0717 00:05:16.743382   20973 client.go:171] duration metric: took 22.762896786s to LocalClient.Create
	I0717 00:05:16.743407   20973 start.go:167] duration metric: took 22.762959111s to libmachine.API.Create "addons-860537"
	I0717 00:05:16.743417   20973 start.go:293] postStartSetup for "addons-860537" (driver="kvm2")
	I0717 00:05:16.743427   20973 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:05:16.743444   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:16.743693   20973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:05:16.743716   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:16.745943   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.746222   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.746243   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.746385   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:16.746569   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:16.746727   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:16.746877   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:16.843604   20973 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:05:16.849203   20973 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:05:16.849274   20973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 00:05:16.849445   20973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 00:05:16.849511   20973 start.go:296] duration metric: took 106.08635ms for postStartSetup
	I0717 00:05:16.849552   20973 main.go:141] libmachine: (addons-860537) Calling .GetConfigRaw
	I0717 00:05:16.881903   20973 main.go:141] libmachine: (addons-860537) Calling .GetIP
	I0717 00:05:16.884588   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.884896   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.884939   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.885171   20973 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/config.json ...
	I0717 00:05:16.885509   20973 start.go:128] duration metric: took 22.923294718s to createHost
	I0717 00:05:16.885543   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:16.887940   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.888331   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.888354   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.888589   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:16.888802   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:16.888976   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:16.889124   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:16.889286   20973 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:16.889487   20973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0717 00:05:16.889501   20973 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 00:05:17.014902   20973 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721174716.984670982
	
	I0717 00:05:17.014929   20973 fix.go:216] guest clock: 1721174716.984670982
	I0717 00:05:17.014938   20973 fix.go:229] Guest: 2024-07-17 00:05:16.984670982 +0000 UTC Remote: 2024-07-17 00:05:16.885527734 +0000 UTC m=+23.025858730 (delta=99.143248ms)
	I0717 00:05:17.014987   20973 fix.go:200] guest clock delta is within tolerance: 99.143248ms
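
The guest clock check parses the VM's date +%s.%N output and accepts the start only if the skew against the host clock stays within a tolerance. A minimal sketch of that comparison, assuming a one-second tolerance (the actual threshold is not printed in this log):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    // clockSkewOK parses the guest's "seconds.nanoseconds" timestamp and reports
    // whether it is within tol of the host reference time.
    func clockSkewOK(guest string, host time.Time, tol time.Duration) (time.Duration, bool) {
    	secs, err := strconv.ParseFloat(guest, 64)
    	if err != nil {
    		return 0, false
    	}
    	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := host.Sub(guestTime)
    	return delta, math.Abs(float64(delta)) <= float64(tol)
    }

    func main() {
    	// Guest timestamp and host reference taken from the log above.
    	host := time.Date(2024, 7, 17, 0, 5, 16, 885527734, time.UTC)
    	delta, ok := clockSkewOK("1721174716.984670982", host, time.Second)
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }
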
	I0717 00:05:17.014994   20973 start.go:83] releasing machines lock for "addons-860537", held for 23.052855063s
	I0717 00:05:17.015017   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:17.015304   20973 main.go:141] libmachine: (addons-860537) Calling .GetIP
	I0717 00:05:17.018066   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:17.018522   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:17.018551   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:17.018674   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:17.019285   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:17.019463   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:17.019545   20973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:05:17.019593   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:17.019728   20973 ssh_runner.go:195] Run: cat /version.json
	I0717 00:05:17.019751   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:17.022637   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:17.022938   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:17.022981   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:17.022998   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:17.023164   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:17.023274   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:17.023327   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:17.023339   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:17.023511   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:17.023567   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:17.023714   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:17.023728   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:17.023902   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:17.024036   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:17.106071   20973 ssh_runner.go:195] Run: systemctl --version
	I0717 00:05:17.136430   20973 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:05:17.476801   20973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:05:17.483890   20973 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:05:17.483968   20973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:05:17.501021   20973 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
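
Disabling the stock bridge/podman CNI configs is done by renaming them with a .mk_disabled suffix so they no longer conflict with the CNI that minikube installs. A minimal sketch of the same rename pass in Go; disableConflictingCNI is an illustrative name, not the find invocation minikube actually runs:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableConflictingCNI renames bridge/podman configs in dir by appending
    // ".mk_disabled", mirroring the find/-exec mv step in the log.
    func disableConflictingCNI(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var disabled []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return disabled, err
    			}
    			disabled = append(disabled, src)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableConflictingCNI("/etc/cni/net.d")
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    	fmt.Println("disabled:", disabled)
    }
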
	I0717 00:05:17.501049   20973 start.go:495] detecting cgroup driver to use...
	I0717 00:05:17.501182   20973 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:05:17.517644   20973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:05:17.533803   20973 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:05:17.533866   20973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:05:17.548763   20973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:05:17.563593   20973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:05:17.678436   20973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:05:17.839660   20973 docker.go:233] disabling docker service ...
	I0717 00:05:17.839733   20973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:05:17.854747   20973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:05:17.868435   20973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:05:17.997646   20973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:05:18.120026   20973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:05:18.134485   20973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:05:18.154165   20973 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:05:18.154233   20973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:18.165599   20973 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:05:18.165651   20973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:18.176888   20973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:18.188288   20973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:18.200053   20973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:05:18.211625   20973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:18.222943   20973 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:18.241750   20973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:18.252983   20973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:05:18.263723   20973 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:05:18.263770   20973 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:05:18.277816   20973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
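
The failed sysctl read above simply means the br_netfilter module is not loaded yet, so the module is loaded and IPv4 forwarding is switched on. A minimal sketch of that check-then-load sequence (it must run as root), assuming the standard /proc/sys paths:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const bridgeNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"

    	// If the sysctl node is absent, br_netfilter has not been loaded yet.
    	if _, err := os.Stat(bridgeNF); os.IsNotExist(err) {
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, out)
    			return
    		}
    	}

    	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    		return
    	}
    	fmt.Println("bridge netfilter and ip_forward configured")
    }
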
	I0717 00:05:18.288297   20973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:05:18.410820   20973 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:05:18.552182   20973 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:05:18.552272   20973 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:05:18.557056   20973 start.go:563] Will wait 60s for crictl version
	I0717 00:05:18.557125   20973 ssh_runner.go:195] Run: which crictl
	I0717 00:05:18.560934   20973 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:05:18.602512   20973 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:05:18.602644   20973 ssh_runner.go:195] Run: crio --version
	I0717 00:05:18.633032   20973 ssh_runner.go:195] Run: crio --version
	I0717 00:05:18.670310   20973 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:05:18.671415   20973 main.go:141] libmachine: (addons-860537) Calling .GetIP
	I0717 00:05:18.673770   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:18.674125   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:18.674151   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:18.674328   20973 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:05:18.678889   20973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
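
The one-liner above rewrites /etc/hosts so that exactly one host.minikube.internal entry points at the gateway IP: any stale line is dropped and a fresh tab-separated entry is appended. A minimal Go sketch of the same idempotent update; it prints the rewritten content instead of overwriting /etc/hosts:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any existing line for the given hostname and appends
    // a fresh "ip<TAB>hostname" entry, mirroring the grep -v / echo pipeline above.
    func ensureHostsEntry(hosts, ip, hostname string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // remove stale entry
    		}
    		kept = append(kept, line)
    	}
    	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
    		fmt.Sprintf("\n%s\t%s\n", ip, hostname)
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	// Print the rewritten content instead of writing it back.
    	fmt.Print(ensureHostsEntry(string(data), "192.168.39.1", "host.minikube.internal"))
    }
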
	I0717 00:05:18.692339   20973 kubeadm.go:883] updating cluster {Name:addons-860537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-860537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:05:18.692446   20973 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:05:18.692486   20973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:05:18.726967   20973 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 00:05:18.727037   20973 ssh_runner.go:195] Run: which lz4
	I0717 00:05:18.731110   20973 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 00:05:18.735566   20973 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 00:05:18.735601   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 00:05:20.108232   20973 crio.go:462] duration metric: took 1.377149319s to copy over tarball
	I0717 00:05:20.108291   20973 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 00:05:22.518305   20973 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.40998787s)
	I0717 00:05:22.518331   20973 crio.go:469] duration metric: took 2.410075699s to extract the tarball
	I0717 00:05:22.518338   20973 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 00:05:22.556149   20973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:05:22.599182   20973 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:05:22.599202   20973 cache_images.go:84] Images are preloaded, skipping loading
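
The preload check runs crictl images --output json and looks for the pinned control-plane image tag; before the tarball is extracted the tag is missing, afterwards it is present and image loading is skipped. A minimal sketch of that check, assuming crictl's JSON output is an "images" array whose entries carry repoTags (the field names here are assumptions; verify against your crictl version):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList matches the assumed shape of `crictl images --output json`.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether the container runtime already knows the given tag.
    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		return false, err
    	}
    	for _, img := range list.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.2")
    	if err != nil {
    		fmt.Println("crictl check failed:", err)
    		return
    	}
    	fmt.Println("preloaded:", ok)
    }
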
	I0717 00:05:22.599218   20973 kubeadm.go:934] updating node { 192.168.39.251 8443 v1.30.2 crio true true} ...
	I0717 00:05:22.599344   20973 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-860537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-860537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:05:22.599436   20973 ssh_runner.go:195] Run: crio config
	I0717 00:05:22.657652   20973 cni.go:84] Creating CNI manager for ""
	I0717 00:05:22.657671   20973 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:05:22.657681   20973 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:05:22.657700   20973 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.251 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-860537 NodeName:addons-860537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:05:22.657877   20973 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-860537"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 00:05:22.657956   20973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:05:22.669211   20973 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:05:22.669281   20973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 00:05:22.680352   20973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 00:05:22.698398   20973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:05:22.716983   20973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0717 00:05:22.735225   20973 ssh_runner.go:195] Run: grep 192.168.39.251	control-plane.minikube.internal$ /etc/hosts
	I0717 00:05:22.739595   20973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:05:22.753033   20973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:05:22.879307   20973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:05:22.902008   20973 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537 for IP: 192.168.39.251
	I0717 00:05:22.902034   20973 certs.go:194] generating shared ca certs ...
	I0717 00:05:22.902054   20973 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:22.902214   20973 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 00:05:22.968659   20973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt ...
	I0717 00:05:22.968685   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt: {Name:mkb7c35c1fe3bf75bf3e04708011446ecd5a1fcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:22.968846   20973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key ...
	I0717 00:05:22.968861   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key: {Name:mka3fe2df73604c22d5a52d9cb761bfc181c1060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:22.968961   20973 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 00:05:23.265092   20973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt ...
	I0717 00:05:23.265121   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt: {Name:mk3ed0d6da8881d88824249cab7761b1991364f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.265283   20973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key ...
	I0717 00:05:23.265293   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key: {Name:mk1f22362e41d60983244fa20bd685145d625754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.265358   20973 certs.go:256] generating profile certs ...
	I0717 00:05:23.265410   20973 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.key
	I0717 00:05:23.265424   20973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt with IP's: []
	I0717 00:05:23.553801   20973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt ...
	I0717 00:05:23.553827   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: {Name:mk26376f17f48227b1a5d85414766d77a530de49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.553972   20973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.key ...
	I0717 00:05:23.553983   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.key: {Name:mk58d4e284faecb0ba73852a57d5f096053a25e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.554052   20973 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.key.6221c5d3
	I0717 00:05:23.554069   20973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.crt.6221c5d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.251]
	I0717 00:05:23.661857   20973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.crt.6221c5d3 ...
	I0717 00:05:23.661884   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.crt.6221c5d3: {Name:mkd09d1f2d8206034dd1c6a9032cfa8fd793256e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.662041   20973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.key.6221c5d3 ...
	I0717 00:05:23.662055   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.key.6221c5d3: {Name:mkb4d15842b2d89a1261df747d61a0afa14b0c1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.662121   20973 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.crt.6221c5d3 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.crt
	I0717 00:05:23.662193   20973 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.key.6221c5d3 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.key
	I0717 00:05:23.662240   20973 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.key
	I0717 00:05:23.662253   20973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.crt with IP's: []
	I0717 00:05:23.957663   20973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.crt ...
	I0717 00:05:23.957688   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.crt: {Name:mk774f090fa46b32ce8968ba55230a177d7df948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.965032   20973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.key ...
	I0717 00:05:23.965055   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.key: {Name:mk2b818ab8fa9e51d9da1ce250e1da4098e5a12e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.965301   20973 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:05:23.965340   20973 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:05:23.965367   20973 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:05:23.965398   20973 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 00:05:23.966102   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:05:23.996149   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:05:24.020622   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:05:24.043956   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:05:24.068159   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 00:05:24.093119   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 00:05:24.118776   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:05:24.146180   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:05:24.174146   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:05:24.198258   20973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:05:24.215186   20973 ssh_runner.go:195] Run: openssl version
	I0717 00:05:24.221129   20973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:05:24.232345   20973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:05:24.237158   20973 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:05:24.237224   20973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:05:24.243267   20973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:05:24.254250   20973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:05:24.258354   20973 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:05:24.258416   20973 kubeadm.go:392] StartCluster: {Name:addons-860537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-860537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:05:24.258508   20973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:05:24.258560   20973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:05:24.302067   20973 cri.go:89] found id: ""
	I0717 00:05:24.302157   20973 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:05:24.312619   20973 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 00:05:24.322258   20973 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 00:05:24.331834   20973 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 00:05:24.331858   20973 kubeadm.go:157] found existing configuration files:
	
	I0717 00:05:24.331906   20973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 00:05:24.340769   20973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 00:05:24.340834   20973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 00:05:24.349950   20973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 00:05:24.358750   20973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 00:05:24.358810   20973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 00:05:24.367676   20973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 00:05:24.376313   20973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 00:05:24.376363   20973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 00:05:24.385733   20973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 00:05:24.394723   20973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 00:05:24.394779   20973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
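Aside: the four grep/rm pairs logged above are a stale-config check — each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443. A compact shell sketch of that pattern (illustrative only; the loop form is mine, minikube issues the commands individually over SSH exactly as logged):

    for f in admin kubelet controller-manager scheduler; do
      # keep the file only if it points at the expected control-plane endpoint
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done

With all four files absent here, kubeadm init proceeds with a clean /etc/kubernetes, as the next log lines show.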
	I0717 00:05:24.404003   20973 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 00:05:24.611986   20973 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 00:05:35.416833   20973 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 00:05:35.416947   20973 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 00:05:35.417043   20973 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 00:05:35.417141   20973 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 00:05:35.417267   20973 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 00:05:35.417333   20973 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 00:05:35.418906   20973 out.go:204]   - Generating certificates and keys ...
	I0717 00:05:35.418981   20973 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 00:05:35.419082   20973 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 00:05:35.419187   20973 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 00:05:35.419262   20973 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 00:05:35.419349   20973 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 00:05:35.419428   20973 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 00:05:35.419508   20973 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 00:05:35.419630   20973 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-860537 localhost] and IPs [192.168.39.251 127.0.0.1 ::1]
	I0717 00:05:35.419675   20973 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 00:05:35.419798   20973 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-860537 localhost] and IPs [192.168.39.251 127.0.0.1 ::1]
	I0717 00:05:35.419883   20973 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 00:05:35.419967   20973 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 00:05:35.420016   20973 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 00:05:35.420063   20973 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 00:05:35.420106   20973 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 00:05:35.420153   20973 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 00:05:35.420224   20973 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 00:05:35.420314   20973 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 00:05:35.420379   20973 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 00:05:35.420467   20973 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 00:05:35.420544   20973 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 00:05:35.421929   20973 out.go:204]   - Booting up control plane ...
	I0717 00:05:35.422038   20973 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 00:05:35.422132   20973 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 00:05:35.422208   20973 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 00:05:35.422333   20973 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 00:05:35.422447   20973 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 00:05:35.422523   20973 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 00:05:35.422628   20973 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 00:05:35.422715   20973 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 00:05:35.422798   20973 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.586518ms
	I0717 00:05:35.422886   20973 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 00:05:35.422934   20973 kubeadm.go:310] [api-check] The API server is healthy after 5.50219884s
	I0717 00:05:35.423034   20973 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 00:05:35.423144   20973 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 00:05:35.423193   20973 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 00:05:35.423346   20973 kubeadm.go:310] [mark-control-plane] Marking the node addons-860537 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 00:05:35.423417   20973 kubeadm.go:310] [bootstrap-token] Using token: ti7zy9.5fookdc00rt06u2m
	I0717 00:05:35.425681   20973 out.go:204]   - Configuring RBAC rules ...
	I0717 00:05:35.425796   20973 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 00:05:35.425902   20973 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 00:05:35.426064   20973 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 00:05:35.426204   20973 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 00:05:35.426362   20973 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 00:05:35.426471   20973 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 00:05:35.426608   20973 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 00:05:35.426679   20973 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 00:05:35.426743   20973 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 00:05:35.426754   20973 kubeadm.go:310] 
	I0717 00:05:35.426830   20973 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 00:05:35.426841   20973 kubeadm.go:310] 
	I0717 00:05:35.426953   20973 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 00:05:35.426965   20973 kubeadm.go:310] 
	I0717 00:05:35.427018   20973 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 00:05:35.427098   20973 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 00:05:35.427176   20973 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 00:05:35.427193   20973 kubeadm.go:310] 
	I0717 00:05:35.427262   20973 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 00:05:35.427271   20973 kubeadm.go:310] 
	I0717 00:05:35.427342   20973 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 00:05:35.427356   20973 kubeadm.go:310] 
	I0717 00:05:35.427430   20973 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 00:05:35.427560   20973 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 00:05:35.427657   20973 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 00:05:35.427669   20973 kubeadm.go:310] 
	I0717 00:05:35.427780   20973 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 00:05:35.427889   20973 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 00:05:35.427908   20973 kubeadm.go:310] 
	I0717 00:05:35.428011   20973 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ti7zy9.5fookdc00rt06u2m \
	I0717 00:05:35.428132   20973 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 \
	I0717 00:05:35.428174   20973 kubeadm.go:310] 	--control-plane 
	I0717 00:05:35.428186   20973 kubeadm.go:310] 
	I0717 00:05:35.428307   20973 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 00:05:35.428317   20973 kubeadm.go:310] 
	I0717 00:05:35.428422   20973 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ti7zy9.5fookdc00rt06u2m \
	I0717 00:05:35.428611   20973 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 
	I0717 00:05:35.428626   20973 cni.go:84] Creating CNI manager for ""
	I0717 00:05:35.428635   20973 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:05:35.430735   20973 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 00:05:35.431978   20973 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 00:05:35.443102   20973 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
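Aside: the 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. Purely as an illustration of what a bridge CNI conflist of this kind typically contains (field values and the 10.244.0.0/16 subnet are assumptions, not the exact file minikube generates):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF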
	I0717 00:05:35.463128   20973 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 00:05:35.463193   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:35.463248   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-860537 minikube.k8s.io/updated_at=2024_07_17T00_05_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=addons-860537 minikube.k8s.io/primary=true
	I0717 00:05:35.497423   20973 ops.go:34] apiserver oom_adj: -16
	I0717 00:05:35.592128   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:36.092464   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:36.592777   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:37.092251   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:37.592795   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:38.092344   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:38.592236   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:39.093036   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:39.592280   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:40.093088   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:40.592163   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:41.092581   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:41.592992   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:42.093107   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:42.592421   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:43.092941   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:43.592907   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:44.093116   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:44.592480   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:45.093036   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:45.592615   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:46.092437   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:46.592996   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:47.092879   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:47.592484   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:48.093007   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:48.592482   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:48.687372   20973 kubeadm.go:1113] duration metric: took 13.224231507s to wait for elevateKubeSystemPrivileges
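Aside: the burst of repeated "kubectl get sa default" runs above is a readiness poll — after kubeadm init, minikube grants cluster-admin to the kube-system default ServiceAccount and then waits for the "default" ServiceAccount to appear, which is what the elevateKubeSystemPrivileges duration metric measures. A condensed shell equivalent (the loop form and the ~0.5s interval are inferred from the timestamps, not an exact reproduction):

    K=/var/lib/minikube/binaries/v1.30.2/kubectl
    sudo "$K" --kubeconfig=/var/lib/minikube/kubeconfig \
      create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default
    # poll until the default ServiceAccount exists
    until sudo "$K" --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done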
	I0717 00:05:48.687416   20973 kubeadm.go:394] duration metric: took 24.42900477s to StartCluster
	I0717 00:05:48.687440   20973 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:48.687580   20973 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:05:48.688043   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:48.688280   20973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 00:05:48.688303   20973 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:05:48.688355   20973 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0717 00:05:48.688454   20973 addons.go:69] Setting yakd=true in profile "addons-860537"
	I0717 00:05:48.688473   20973 addons.go:69] Setting inspektor-gadget=true in profile "addons-860537"
	I0717 00:05:48.688495   20973 addons.go:234] Setting addon yakd=true in "addons-860537"
	I0717 00:05:48.688501   20973 config.go:182] Loaded profile config "addons-860537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:05:48.688504   20973 addons.go:234] Setting addon inspektor-gadget=true in "addons-860537"
	I0717 00:05:48.688530   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.688533   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.688536   20973 addons.go:69] Setting volcano=true in profile "addons-860537"
	I0717 00:05:48.688497   20973 addons.go:69] Setting storage-provisioner=true in profile "addons-860537"
	I0717 00:05:48.688571   20973 addons.go:234] Setting addon volcano=true in "addons-860537"
	I0717 00:05:48.688564   20973 addons.go:69] Setting gcp-auth=true in profile "addons-860537"
	I0717 00:05:48.688594   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.688601   20973 addons.go:234] Setting addon storage-provisioner=true in "addons-860537"
	I0717 00:05:48.688603   20973 mustload.go:65] Loading cluster: addons-860537
	I0717 00:05:48.688649   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.688826   20973 config.go:182] Loaded profile config "addons-860537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:05:48.688994   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.689020   20973 addons.go:69] Setting volumesnapshots=true in profile "addons-860537"
	I0717 00:05:48.689020   20973 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-860537"
	I0717 00:05:48.689030   20973 addons.go:69] Setting helm-tiller=true in profile "addons-860537"
	I0717 00:05:48.689039   20973 addons.go:234] Setting addon volumesnapshots=true in "addons-860537"
	I0717 00:05:48.689080   20973 addons.go:69] Setting registry=true in profile "addons-860537"
	I0717 00:05:48.689094   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.689108   20973 addons.go:234] Setting addon registry=true in "addons-860537"
	I0717 00:05:48.689135   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.689159   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.689230   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.689042   20973 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-860537"
	I0717 00:05:48.689257   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.689082   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.689099   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.689601   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.689021   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.689047   20973 addons.go:69] Setting ingress=true in profile "addons-860537"
	I0717 00:05:48.689780   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.689054   20973 addons.go:69] Setting ingress-dns=true in profile "addons-860537"
	I0717 00:05:48.690278   20973 addons.go:234] Setting addon ingress-dns=true in "addons-860537"
	I0717 00:05:48.690373   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.689060   20973 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-860537"
	I0717 00:05:48.689064   20973 addons.go:69] Setting cloud-spanner=true in profile "addons-860537"
	I0717 00:05:48.690502   20973 addons.go:234] Setting addon cloud-spanner=true in "addons-860537"
	I0717 00:05:48.689624   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.690525   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.689065   20973 addons.go:234] Setting addon helm-tiller=true in "addons-860537"
	I0717 00:05:48.689063   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.690569   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.690573   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.691011   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.691046   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.689070   20973 addons.go:69] Setting default-storageclass=true in profile "addons-860537"
	I0717 00:05:48.691869   20973 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-860537"
	I0717 00:05:48.692781   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.692979   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.692311   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.693066   20973 out.go:177] * Verifying Kubernetes components...
	I0717 00:05:48.693088   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.689790   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.692813   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.689076   20973 addons.go:69] Setting metrics-server=true in profile "addons-860537"
	I0717 00:05:48.693506   20973 addons.go:234] Setting addon metrics-server=true in "addons-860537"
	I0717 00:05:48.693548   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.690140   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.690152   20973 addons.go:234] Setting addon ingress=true in "addons-860537"
	I0717 00:05:48.691749   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.691797   20973 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-860537"
	I0717 00:05:48.694381   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.694493   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.701796   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.691819   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.704789   20973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:05:48.689073   20973 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-860537"
	I0717 00:05:48.708312   20973 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-860537"
	I0717 00:05:48.708377   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.709015   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.709077   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.710634   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43079
	I0717 00:05:48.711074   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.711800   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.711825   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.712198   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.712818   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.712855   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.713719   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43699
	I0717 00:05:48.714407   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.715006   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.715043   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.715398   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.716131   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.716213   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.717095   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.717141   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.717262   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43459
	I0717 00:05:48.717776   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.717795   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.718130   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.718439   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.718471   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.729994   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33315
	I0717 00:05:48.730211   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42517
	I0717 00:05:48.730387   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.730401   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.730812   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.731359   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.731492   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.731522   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.731911   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.731940   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.731958   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.732023   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45971
	I0717 00:05:48.732363   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.732366   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.732714   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.732729   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.733651   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.734072   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.734100   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.735707   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.736110   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.736130   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.737047   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.737064   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.737142   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34697
	I0717 00:05:48.737323   20973 addons.go:234] Setting addon default-storageclass=true in "addons-860537"
	I0717 00:05:48.737361   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.737740   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.737741   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.737779   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.738292   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.738326   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.739644   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I0717 00:05:48.740058   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.740641   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.740666   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.741112   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.741128   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.741698   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.741743   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.743924   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0717 00:05:48.744351   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.744879   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.744900   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.745028   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.745046   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.745276   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.745463   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.745495   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.746470   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.746509   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.747952   20973 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-860537"
	I0717 00:05:48.747990   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.748245   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.748291   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.759141   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I0717 00:05:48.759647   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.760244   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.760269   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.760631   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.760815   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.763425   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36899
	I0717 00:05:48.763783   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.764284   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.764303   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.764662   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.764718   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.764971   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.766633   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.766772   20973 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 00:05:48.766935   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:48.766950   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:48.767092   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:48.767106   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:48.767114   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:48.767121   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:48.767395   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:48.767408   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	W0717 00:05:48.767498   20973 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0717 00:05:48.768111   20973 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 00:05:48.768126   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 00:05:48.768144   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.770941   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.771336   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.771357   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.771515   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.771719   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.771867   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.772048   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.774768   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0717 00:05:48.775390   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.775890   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.775903   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.776231   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.776871   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.776922   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.777805   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38363
	I0717 00:05:48.778226   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.778636   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.778657   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.779081   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.779641   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.779680   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.782428   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0717 00:05:48.782780   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.783199   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.783211   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.783539   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.783962   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.783976   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.786368   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44499
	I0717 00:05:48.786498   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38981
	I0717 00:05:48.786669   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0717 00:05:48.786805   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.786808   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.787297   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.787305   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.787323   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.787323   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.787481   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.787702   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.787938   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.787999   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.788100   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.789158   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I0717 00:05:48.789261   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0717 00:05:48.789439   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.789452   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.789816   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.790288   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.790301   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.790592   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.791095   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.791127   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.791576   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33149
	I0717 00:05:48.791691   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.792876   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.792896   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.793200   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.793296   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.793773   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.793791   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.793891   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.793933   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.794373   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.794913   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.794967   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.795257   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.795919   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0717 00:05:48.795926   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40339
	I0717 00:05:48.796046   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40057
	I0717 00:05:48.796536   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.796650   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.796747   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.796790   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.797224   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.797240   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.797262   20973 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 00:05:48.797305   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.797569   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.797585   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.798049   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.798100   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.798262   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.798635   20973 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 00:05:48.798784   20973 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 00:05:48.798797   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 00:05:48.798814   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.799363   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0717 00:05:48.799560   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.799604   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.800115   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33359
	I0717 00:05:48.800136   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.800442   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.800961   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.800979   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.801375   20973 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 00:05:48.801470   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.801649   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.802193   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.802208   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.802257   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42291
	I0717 00:05:48.802620   20973 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 00:05:48.802641   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 00:05:48.802659   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.802670   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.802724   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.802725   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.802968   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.803169   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.803183   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.803269   20973 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:05:48.803300   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.803335   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.803532   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.803731   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.803862   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.803876   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.804238   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.804419   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.804519   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.804537   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.804868   20973 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:05:48.804885   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:05:48.804910   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.804972   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.805072   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.805103   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.805263   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.805465   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.805715   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.807377   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.807660   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.807962   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.808025   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.808063   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.808291   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.808594   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.808757   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.808867   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.809348   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.809458   20973 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 00:05:48.809774   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.809980   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.810066   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.810228   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.810404   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.810443   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 00:05:48.810561   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.811160   20973 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 00:05:48.811177   20973 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 00:05:48.811194   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.812937   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 00:05:48.814121   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 00:05:48.814678   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.815181   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.815199   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.815343   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.815597   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.815740   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.815853   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.816978   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 00:05:48.818345   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 00:05:48.819709   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36909
	I0717 00:05:48.819717   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 00:05:48.820267   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.820819   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.820835   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.821216   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.821517   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.822068   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 00:05:48.823332   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 00:05:48.823466   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.824666   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 00:05:48.824686   20973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 00:05:48.824707   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.825454   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 00:05:48.826642   20973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 00:05:48.826660   20973 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 00:05:48.826680   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.827987   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.828362   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.828393   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.828638   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.828797   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.828949   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.828955   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0717 00:05:48.829077   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.829442   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34451
	I0717 00:05:48.829744   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.830269   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.830292   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.830611   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.830649   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.830828   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.831092   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.831253   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.831258   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.831497   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.831520   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.831709   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.831860   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.832432   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.832452   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.832841   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.832889   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.833430   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.834950   20973 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 00:05:48.835389   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.836641   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34591
	I0717 00:05:48.837370   20973 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:05:48.838793   20973 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:05:48.838944   20973 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 00:05:48.839045   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34457
	I0717 00:05:48.839163   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41645
	I0717 00:05:48.839254   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.839784   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.839936   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.839963   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.840370   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.840395   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.840444   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.840482   20973 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:05:48.840503   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 00:05:48.840509   20973 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 00:05:48.840520   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.840524   20973 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 00:05:48.840626   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.840827   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.840878   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.840998   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.841493   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I0717 00:05:48.842382   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.842692   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.843324   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.843721   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.843540   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.843782   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.844096   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.844160   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.844192   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.844690   20973 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:05:48.844707   20973 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:05:48.844716   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.844724   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.845409   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.845435   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0717 00:05:48.845446   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.845467   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.845477   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.845531   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.845962   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.845991   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.845972   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.846025   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.846187   20973 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 00:05:48.846526   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.846586   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.846604   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.846619   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.846690   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.846835   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.846870   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.846962   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.847032   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.847333   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.847369   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.847857   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.848092   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.848305   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.848899   20973 out.go:177]   - Using image docker.io/busybox:stable
	I0717 00:05:48.849341   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.849546   20973 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 00:05:48.849552   20973 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 00:05:48.849704   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.850179   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.850205   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.850497   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.850662   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.850710   20973 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:05:48.850723   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 00:05:48.850736   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.850838   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.850963   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.851334   20973 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 00:05:48.851337   20973 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 00:05:48.851466   20973 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 00:05:48.851482   20973 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:05:48.851492   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 00:05:48.851503   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.851484   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.852789   20973 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:05:48.852807   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 00:05:48.852823   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.855517   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.855702   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.856083   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.856122   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.856193   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.856215   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.856457   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.856457   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.856601   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.856788   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.856853   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.856898   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.857115   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.857124   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.857143   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.857166   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.857175   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.857410   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.857405   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.857421   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.857432   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.857447   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.857589   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.857806   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.857832   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.857989   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.858026   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.858155   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	W0717 00:05:48.858796   20973 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41888->192.168.39.251:22: read: connection reset by peer
	I0717 00:05:48.858824   20973 retry.go:31] will retry after 157.983593ms: ssh: handshake failed: read tcp 192.168.39.1:41888->192.168.39.251:22: read: connection reset by peer
	W0717 00:05:49.020376   20973 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41912->192.168.39.251:22: read: connection reset by peer
	I0717 00:05:49.020411   20973 retry.go:31] will retry after 328.328052ms: ssh: handshake failed: read tcp 192.168.39.1:41912->192.168.39.251:22: read: connection reset by peer
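	[editor's note] The two warnings above are minikube's ssh layer recovering from the guest resetting concurrent SSH dials: it waits a short interval and dials again. Purely as an illustration of that retry-after-a-delay pattern (a hypothetical dialWithRetry helper, not minikube's actual sshutil/retry code), a minimal Go sketch:

	package main

	import (
		"fmt"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry re-dials a flaky SSH endpoint a few times with a growing
	// delay, mirroring the "dial failure (will retry after ...)" lines above.
	func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
		delay := 150 * time.Millisecond
		var lastErr error
		for i := 0; i < attempts; i++ {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			lastErr = err
			fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // back off a little more on each attempt
		}
		return nil, fmt.Errorf("ssh dial %s failed after %d attempts: %w", addr, attempts, lastErr)
	}

	func main() {
		cfg := &ssh.ClientConfig{
			User:            "docker",
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
			// Auth would be loaded from the machine's id_rsa key shown in the log.
		}
		if _, err := dialWithRetry("192.168.39.251:22", cfg, 3); err != nil {
			fmt.Println(err)
		}
	}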
	I0717 00:05:49.154190   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 00:05:49.174373   20973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:05:49.174452   20973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 00:05:49.251082   20973 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 00:05:49.251106   20973 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 00:05:49.283672   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:05:49.305146   20973 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 00:05:49.305170   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 00:05:49.334802   20973 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 00:05:49.334832   20973 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 00:05:49.336791   20973 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 00:05:49.336809   20973 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 00:05:49.338650   20973 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 00:05:49.338664   20973 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 00:05:49.340568   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 00:05:49.340587   20973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 00:05:49.363219   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:05:49.369498   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:05:49.379256   20973 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:05:49.379282   20973 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 00:05:49.382569   20973 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 00:05:49.382589   20973 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 00:05:49.388619   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:05:49.426553   20973 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 00:05:49.426583   20973 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 00:05:49.445865   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:05:49.464581   20973 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 00:05:49.464607   20973 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 00:05:49.525125   20973 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 00:05:49.525156   20973 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 00:05:49.565480   20973 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:05:49.565504   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 00:05:49.573648   20973 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 00:05:49.573680   20973 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 00:05:49.585219   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 00:05:49.585249   20973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 00:05:49.588300   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:05:49.597519   20973 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 00:05:49.597539   20973 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 00:05:49.641378   20973 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:05:49.641399   20973 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 00:05:49.700291   20973 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 00:05:49.700319   20973 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 00:05:49.729450   20973 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 00:05:49.729477   20973 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 00:05:49.737087   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:05:49.738313   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 00:05:49.738327   20973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 00:05:49.759225   20973 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 00:05:49.759254   20973 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 00:05:49.814408   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:05:49.916567   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:05:49.932260   20973 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:05:49.932284   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 00:05:49.968232   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 00:05:49.968260   20973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 00:05:50.013627   20973 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 00:05:50.013647   20973 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 00:05:50.058203   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 00:05:50.058227   20973 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 00:05:50.213766   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 00:05:50.213792   20973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 00:05:50.245355   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:05:50.285878   20973 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 00:05:50.285903   20973 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 00:05:50.344246   20973 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:05:50.344278   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 00:05:50.362038   20973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 00:05:50.362065   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 00:05:50.425836   20973 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:05:50.425853   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 00:05:50.621765   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:05:50.635573   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:05:50.657228   20973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 00:05:50.657250   20973 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 00:05:50.822234   20973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 00:05:50.822263   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 00:05:50.885960   20973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 00:05:50.885983   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 00:05:51.060470   20973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:05:51.060499   20973 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 00:05:51.330535   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.176305812s)
	I0717 00:05:51.330581   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:51.330595   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:51.330638   20973 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.156184702s)
	I0717 00:05:51.330671   20973 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.156191366s)
	I0717 00:05:51.330686   20973 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 00:05:51.330867   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:51.330882   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:51.330895   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:51.330906   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:51.331630   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:51.331682   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:51.331634   20973 node_ready.go:35] waiting up to 6m0s for node "addons-860537" to be "Ready" ...
	I0717 00:05:51.331653   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:51.336267   20973 node_ready.go:49] node "addons-860537" has status "Ready":"True"
	I0717 00:05:51.336284   20973 node_ready.go:38] duration metric: took 4.577513ms for node "addons-860537" to be "Ready" ...
	I0717 00:05:51.336292   20973 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:05:51.345614   20973 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b656z" in "kube-system" namespace to be "Ready" ...
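	[editor's note] The node_ready/pod_ready lines in this stretch are minikube polling each system-critical pod until its Ready condition turns True, with a 6-minute ceiling per pod. As a hedged sketch only (hypothetical waitPodReady helper built on client-go, not minikube's pod_ready.go), the same wait could be written as:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the named pod reports the Ready condition True,
	// or the timeout elapses.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling through transient errors
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitPodReady(cs, "kube-system", "coredns-7db6d8ff4d-b656z", 6*time.Minute))
	}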
	I0717 00:05:51.574110   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:05:51.835993   20973 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-860537" context rescaled to 1 replicas
	I0717 00:05:53.655210   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.371501679s)
	I0717 00:05:53.655266   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:53.655274   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:53.655574   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:53.655598   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:53.655608   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:53.655616   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:53.655622   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:53.655862   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:53.655878   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:53.682419   20973 pod_ready.go:102] pod "coredns-7db6d8ff4d-b656z" in "kube-system" namespace has status "Ready":"False"
	I0717 00:05:53.713046   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.349784764s)
	I0717 00:05:53.713116   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:53.713132   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:53.713449   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:53.713478   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:53.713482   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:53.713492   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:53.713501   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:53.713735   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:53.713749   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:55.797574   20973 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 00:05:55.797616   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:55.800870   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:55.801356   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:55.801385   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:55.801555   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:55.801753   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:55.801893   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:55.802033   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:55.852120   20973 pod_ready.go:102] pod "coredns-7db6d8ff4d-b656z" in "kube-system" namespace has status "Ready":"False"
	I0717 00:05:56.185497   20973 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 00:05:56.355890   20973 addons.go:234] Setting addon gcp-auth=true in "addons-860537"
	I0717 00:05:56.355953   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:56.356410   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:56.356449   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:56.372343   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0717 00:05:56.372787   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:56.373351   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:56.373378   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:56.373769   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:56.374311   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:56.374335   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:56.388828   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0717 00:05:56.389226   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:56.389729   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:56.389747   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:56.390075   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:56.390258   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:56.391864   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:56.392088   20973 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 00:05:56.392111   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:56.394673   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:56.395039   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:56.395065   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:56.395249   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:56.395427   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:56.395569   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:56.395681   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:56.883632   20973 pod_ready.go:92] pod "coredns-7db6d8ff4d-b656z" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:56.883663   20973 pod_ready.go:81] duration metric: took 5.538025473s for pod "coredns-7db6d8ff4d-b656z" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:56.883677   20973 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x569p" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:56.936885   20973 pod_ready.go:92] pod "coredns-7db6d8ff4d-x569p" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:56.936918   20973 pod_ready.go:81] duration metric: took 53.232285ms for pod "coredns-7db6d8ff4d-x569p" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:56.936933   20973 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.008504   20973 pod_ready.go:92] pod "etcd-addons-860537" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:57.008534   20973 pod_ready.go:81] duration metric: took 71.592091ms for pod "etcd-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.008547   20973 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.038261   20973 pod_ready.go:92] pod "kube-apiserver-addons-860537" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:57.038282   20973 pod_ready.go:81] duration metric: took 29.727649ms for pod "kube-apiserver-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.038292   20973 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.130106   20973 pod_ready.go:92] pod "kube-controller-manager-addons-860537" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:57.130144   20973 pod_ready.go:81] duration metric: took 91.844778ms for pod "kube-controller-manager-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.130159   20973 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6kwx2" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.275961   20973 pod_ready.go:92] pod "kube-proxy-6kwx2" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:57.275985   20973 pod_ready.go:81] duration metric: took 145.817601ms for pod "kube-proxy-6kwx2" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.275997   20973 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.663819   20973 pod_ready.go:92] pod "kube-scheduler-addons-860537" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:57.663847   20973 pod_ready.go:81] duration metric: took 387.842076ms for pod "kube-scheduler-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.663860   20973 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.759432   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.389899649s)
	I0717 00:05:57.759487   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759503   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.759508   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.370859481s)
	I0717 00:05:57.759556   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759572   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.759590   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.31369437s)
	I0717 00:05:57.759622   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759634   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.759636   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.171302199s)
	I0717 00:05:57.759665   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759669   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.022556742s)
	I0717 00:05:57.759682   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.759787   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.945334344s)
	I0717 00:05:57.759812   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759817   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.759826   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.759858   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.759867   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.759876   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759884   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.759930   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.843339825s)
	I0717 00:05:57.759947   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759955   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.760024   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.514642074s)
	I0717 00:05:57.760040   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.760050   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.760173   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.138377137s)
	W0717 00:05:57.760202   20973 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 00:05:57.760228   20973 retry.go:31] will retry after 357.546872ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
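	[editor's note] The failure above is a CRD registration race, not a broken manifest: the VolumeSnapshot CRDs and the VolumeSnapshotClass that instantiates them are sent in a single kubectl apply, so the API server has not yet registered the new kind when the class object is validated ("no matches for kind ... ensure CRDs are installed first"). minikube simply re-runs the whole apply after a short delay, by which time the CRDs exist. A rough Go sketch of that retry-the-apply approach (placeholder binary name and paths, not minikube's addons code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` while the API server still reports
	// "no matches for kind", i.e. until the CRDs created by the same manifest
	// set have been registered.
	func applyWithRetry(kubectl string, manifests []string, attempts int) error {
		args := append([]string{"apply"}, flagify(manifests)...)
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command(kubectl, args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("%v: %s", err, out)
			if !strings.Contains(string(out), "no matches for kind") {
				return lastErr // a different failure: do not retry blindly
			}
			time.Sleep(400 * time.Millisecond) // give the CRDs time to register
		}
		return lastErr
	}

	// flagify turns a list of manifest paths into repeated -f flags.
	func flagify(paths []string) []string {
		var out []string
		for _, p := range paths {
			out = append(out, "-f", p)
		}
		return out
	}

	func main() {
		err := applyWithRetry("kubectl", []string{
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
			// ...plus the volumesnapshot CRD and controller manifests from the log
		}, 5)
		if err != nil {
			fmt.Println(err)
		}
	}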
	I0717 00:05:57.760308   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.124693251s)
	I0717 00:05:57.760337   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.760376   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.760394   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.760398   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.760408   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.760419   20973 addons.go:475] Verifying addon ingress=true in "addons-860537"
	I0717 00:05:57.760427   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.760446   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.760475   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.760483   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.760492   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.760500   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.760617   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.760629   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.760638   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.760846   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.760858   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.761250   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.761290   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.761312   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.761325   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.761774   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.761824   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.761918   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.761965   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.761985   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.762022   20973 addons.go:475] Verifying addon metrics-server=true in "addons-860537"
	I0717 00:05:57.762203   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.762233   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.762240   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.762248   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.762254   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.763027   20973 out.go:177] * Verifying ingress addon...
	I0717 00:05:57.763209   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.763228   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.763228   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.763241   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.763247   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.763251   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.763260   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.763261   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.763277   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.763281   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.763252   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.763316   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.763319   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.763323   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.763338   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.763344   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.763350   20973 addons.go:475] Verifying addon registry=true in "addons-860537"
	I0717 00:05:57.763599   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.763633   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.763642   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.764290   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.764307   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.764469   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.764480   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.764487   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.764495   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.764495   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.764502   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.764525   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.764532   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.764539   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.764546   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.764666   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.764671   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.764680   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.764961   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.765017   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.765043   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.765441   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.766298   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.765685   20973 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 00:05:57.765727   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.768030   20973 out.go:177] * Verifying registry addon...
	I0717 00:05:57.768911   20973 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-860537 service yakd-dashboard -n yakd-dashboard
	
	I0717 00:05:57.770465   20973 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 00:05:57.789659   20973 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 00:05:57.789686   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:57.802267   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.802289   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.802601   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.802681   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.802713   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	W0717 00:05:57.802807   20973 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0717 00:05:57.808245   20973 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 00:05:57.808270   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:57.822493   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.822519   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.822763   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.822776   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.822783   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:58.118849   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:05:58.279958   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:58.287494   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:58.795339   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:58.795863   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:59.359001   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:59.379427   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:59.444182   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.870026514s)
	I0717 00:05:59.444242   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:59.444256   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:59.444297   20973 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.052184704s)
	I0717 00:05:59.444564   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:59.444586   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:59.444597   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:59.444606   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:59.444609   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:59.444870   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:59.444888   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:59.444903   20973 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-860537"
	I0717 00:05:59.446484   20973 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 00:05:59.446501   20973 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 00:05:59.448406   20973 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:05:59.449231   20973 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 00:05:59.450336   20973 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 00:05:59.450354   20973 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 00:05:59.467443   20973 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 00:05:59.467474   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:59.630432   20973 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 00:05:59.630469   20973 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 00:05:59.677617   20973 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace has status "Ready":"False"
	I0717 00:05:59.712260   20973 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:05:59.712288   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 00:05:59.775719   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:59.780707   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:59.834038   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:05:59.954951   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:00.271736   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:00.275545   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:00.343979   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.225082634s)
	I0717 00:06:00.344021   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:06:00.344033   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:06:00.344306   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:06:00.344328   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:06:00.344339   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:06:00.344348   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:06:00.344354   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:06:00.344588   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:06:00.344601   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:06:00.344622   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:06:00.454957   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:00.795541   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:00.832803   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:00.874657   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.040570421s)
	I0717 00:06:00.874711   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:06:00.874727   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:06:00.875011   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:06:00.875032   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:06:00.875041   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:06:00.875049   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:06:00.875087   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:06:00.875276   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:06:00.875348   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:06:00.875329   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:06:00.876644   20973 addons.go:475] Verifying addon gcp-auth=true in "addons-860537"
	I0717 00:06:00.878268   20973 out.go:177] * Verifying gcp-auth addon...
	I0717 00:06:00.880057   20973 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 00:06:00.916212   20973 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 00:06:00.916237   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:00.987581   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:01.276954   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:01.281730   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:01.392885   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:01.455200   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:01.687479   20973 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:01.780686   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:01.784290   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:01.891113   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:01.956285   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:02.271369   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:02.286313   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:02.385084   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:02.455158   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:02.770694   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:02.781678   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:02.900885   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:02.967230   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:03.270836   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:03.274520   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:03.382882   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:03.456330   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:03.771303   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:03.789189   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:03.884628   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:03.954903   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:04.169899   20973 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:04.270501   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:04.274948   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:04.383864   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:04.454958   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:04.770914   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:04.774483   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:04.883547   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:04.955240   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:05.271098   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:05.275254   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:05.384152   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:05.455267   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:05.771201   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:05.775056   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:05.883968   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:05.956042   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:06.270522   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:06.273674   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:06.384325   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:06.455049   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:06.670726   20973 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:06.770821   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:06.774443   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:06.883797   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:06.954494   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:07.270919   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:07.274486   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:07.695845   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:07.696510   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:07.771311   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:07.773973   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:07.883768   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:07.954633   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:08.271664   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:08.274784   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:08.385723   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:08.454960   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:08.771339   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:08.774316   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:08.883767   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:08.955536   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:09.170426   20973 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:09.271367   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:09.275638   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:09.384157   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:09.457412   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:09.856151   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:09.856365   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:09.884481   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:09.955625   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:10.272010   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:10.275464   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:10.383333   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:10.456595   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:10.670169   20973 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:10.670189   20973 pod_ready.go:81] duration metric: took 13.006321739s for pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:10.670196   20973 pod_ready.go:38] duration metric: took 19.333895971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:06:10.670209   20973 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:06:10.670263   20973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:06:10.701286   20973 api_server.go:72] duration metric: took 22.012949714s to wait for apiserver process to appear ...
	I0717 00:06:10.701307   20973 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:06:10.701324   20973 api_server.go:253] Checking apiserver healthz at https://192.168.39.251:8443/healthz ...
	I0717 00:06:10.705334   20973 api_server.go:279] https://192.168.39.251:8443/healthz returned 200:
	ok
	I0717 00:06:10.706257   20973 api_server.go:141] control plane version: v1.30.2
	I0717 00:06:10.706278   20973 api_server.go:131] duration metric: took 4.963458ms to wait for apiserver health ...
	I0717 00:06:10.706287   20973 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:06:10.716668   20973 system_pods.go:59] 18 kube-system pods found
	I0717 00:06:10.716695   20973 system_pods.go:61] "coredns-7db6d8ff4d-x569p" [1e4c6914-ede3-4b0b-b696-83768c15f61f] Running
	I0717 00:06:10.716703   20973 system_pods.go:61] "csi-hostpath-attacher-0" [1e997ac0-7c52-48b7-9a1a-bf461ba09162] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 00:06:10.716709   20973 system_pods.go:61] "csi-hostpath-resizer-0" [b4942844-db70-42e9-b530-db4bcfb28f68] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 00:06:10.716716   20973 system_pods.go:61] "csi-hostpathplugin-spxjk" [01553a53-f10f-43eb-8581-452ce918ba15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 00:06:10.716724   20973 system_pods.go:61] "etcd-addons-860537" [4ece71aa-f418-49c5-b9d6-328918a4520a] Running
	I0717 00:06:10.716731   20973 system_pods.go:61] "kube-apiserver-addons-860537" [2a014807-df86-4a41-bb77-45cdd720c9bc] Running
	I0717 00:06:10.716736   20973 system_pods.go:61] "kube-controller-manager-addons-860537" [c9390bef-106f-4c8f-b0c7-bdbb3cf6a3a7] Running
	I0717 00:06:10.716748   20973 system_pods.go:61] "kube-ingress-dns-minikube" [a772ebab-91ad-4da1-be93-836f7a6b65a9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 00:06:10.716754   20973 system_pods.go:61] "kube-proxy-6kwx2" [95bc49e4-c111-4184-83f6-14800ece6dc1] Running
	I0717 00:06:10.716761   20973 system_pods.go:61] "kube-scheduler-addons-860537" [f0353750-5ac0-464a-9f2c-1e926a5ba6dc] Running
	I0717 00:06:10.716768   20973 system_pods.go:61] "metrics-server-c59844bb4-zq4m7" [332284a0-4c05-4737-8669-c71012684bb2] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 00:06:10.716777   20973 system_pods.go:61] "nvidia-device-plugin-daemonset-pcbjh" [631d74e8-bdf2-43b3-b053-cdcade929069] Running
	I0717 00:06:10.716786   20973 system_pods.go:61] "registry-proxy-vpbzw" [961d65cb-7faf-4f3a-86ef-8916920fcba6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 00:06:10.716796   20973 system_pods.go:61] "registry-v6n4c" [66c9585d-752a-4ad2-9c99-b9bff568c44d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 00:06:10.716809   20973 system_pods.go:61] "snapshot-controller-745499f584-6fsd7" [a0ab2b73-f917-4c6c-95f8-d516cf54a3f1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 00:06:10.716818   20973 system_pods.go:61] "snapshot-controller-745499f584-z8rr5" [49153b18-e1ad-4512-9ead-6a432b9e0c7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 00:06:10.716829   20973 system_pods.go:61] "storage-provisioner" [71073df2-0967-430a-94e9-5a3641c16eed] Running
	I0717 00:06:10.716837   20973 system_pods.go:61] "tiller-deploy-6677d64bcd-5nxgc" [77b4eedd-c82b-401f-9057-a7a11b13510b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0717 00:06:10.716847   20973 system_pods.go:74] duration metric: took 10.554229ms to wait for pod list to return data ...
	I0717 00:06:10.716858   20973 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:06:10.719747   20973 default_sa.go:45] found service account: "default"
	I0717 00:06:10.719767   20973 default_sa.go:55] duration metric: took 2.901867ms for default service account to be created ...
	I0717 00:06:10.719775   20973 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:06:10.732167   20973 system_pods.go:86] 18 kube-system pods found
	I0717 00:06:10.732195   20973 system_pods.go:89] "coredns-7db6d8ff4d-x569p" [1e4c6914-ede3-4b0b-b696-83768c15f61f] Running
	I0717 00:06:10.732206   20973 system_pods.go:89] "csi-hostpath-attacher-0" [1e997ac0-7c52-48b7-9a1a-bf461ba09162] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 00:06:10.732216   20973 system_pods.go:89] "csi-hostpath-resizer-0" [b4942844-db70-42e9-b530-db4bcfb28f68] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 00:06:10.732226   20973 system_pods.go:89] "csi-hostpathplugin-spxjk" [01553a53-f10f-43eb-8581-452ce918ba15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 00:06:10.732236   20973 system_pods.go:89] "etcd-addons-860537" [4ece71aa-f418-49c5-b9d6-328918a4520a] Running
	I0717 00:06:10.732243   20973 system_pods.go:89] "kube-apiserver-addons-860537" [2a014807-df86-4a41-bb77-45cdd720c9bc] Running
	I0717 00:06:10.732250   20973 system_pods.go:89] "kube-controller-manager-addons-860537" [c9390bef-106f-4c8f-b0c7-bdbb3cf6a3a7] Running
	I0717 00:06:10.732264   20973 system_pods.go:89] "kube-ingress-dns-minikube" [a772ebab-91ad-4da1-be93-836f7a6b65a9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 00:06:10.732273   20973 system_pods.go:89] "kube-proxy-6kwx2" [95bc49e4-c111-4184-83f6-14800ece6dc1] Running
	I0717 00:06:10.732283   20973 system_pods.go:89] "kube-scheduler-addons-860537" [f0353750-5ac0-464a-9f2c-1e926a5ba6dc] Running
	I0717 00:06:10.732293   20973 system_pods.go:89] "metrics-server-c59844bb4-zq4m7" [332284a0-4c05-4737-8669-c71012684bb2] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 00:06:10.732305   20973 system_pods.go:89] "nvidia-device-plugin-daemonset-pcbjh" [631d74e8-bdf2-43b3-b053-cdcade929069] Running
	I0717 00:06:10.732314   20973 system_pods.go:89] "registry-proxy-vpbzw" [961d65cb-7faf-4f3a-86ef-8916920fcba6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 00:06:10.732324   20973 system_pods.go:89] "registry-v6n4c" [66c9585d-752a-4ad2-9c99-b9bff568c44d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 00:06:10.732340   20973 system_pods.go:89] "snapshot-controller-745499f584-6fsd7" [a0ab2b73-f917-4c6c-95f8-d516cf54a3f1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 00:06:10.732353   20973 system_pods.go:89] "snapshot-controller-745499f584-z8rr5" [49153b18-e1ad-4512-9ead-6a432b9e0c7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 00:06:10.732361   20973 system_pods.go:89] "storage-provisioner" [71073df2-0967-430a-94e9-5a3641c16eed] Running
	I0717 00:06:10.732373   20973 system_pods.go:89] "tiller-deploy-6677d64bcd-5nxgc" [77b4eedd-c82b-401f-9057-a7a11b13510b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0717 00:06:10.732383   20973 system_pods.go:126] duration metric: took 12.600811ms to wait for k8s-apps to be running ...
	I0717 00:06:10.732397   20973 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:06:10.732446   20973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:06:10.770442   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:10.777108   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:10.778962   20973 system_svc.go:56] duration metric: took 46.562029ms WaitForService to wait for kubelet
	I0717 00:06:10.778982   20973 kubeadm.go:582] duration metric: took 22.090648397s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:06:10.779004   20973 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:06:10.783136   20973 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:06:10.783157   20973 node_conditions.go:123] node cpu capacity is 2
	I0717 00:06:10.783167   20973 node_conditions.go:105] duration metric: took 4.158763ms to run NodePressure ...
	I0717 00:06:10.783176   20973 start.go:241] waiting for startup goroutines ...
	I0717 00:06:10.884894   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:10.954274   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:11.270696   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:11.274091   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:11.384697   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:11.460046   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:11.770510   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:11.774870   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:11.885635   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:11.954667   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:12.273270   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:12.278815   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:12.384083   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:12.455613   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:12.771037   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:12.775894   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:12.883766   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:12.955108   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:13.272179   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:13.275126   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:13.384487   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:13.455480   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:13.770604   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:13.774304   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:14.127552   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:14.132254   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:14.283771   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:14.286030   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:14.384010   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:14.454505   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:14.771615   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:14.774785   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:14.883525   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:14.954724   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:15.271228   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:15.275234   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:15.384530   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:15.454276   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:15.770997   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:15.774894   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:15.884048   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:15.956402   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:16.270251   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:16.274555   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:16.383273   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:16.454946   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:16.771252   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:16.775018   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:16.884720   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:16.956410   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:17.271465   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:17.275566   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:17.384760   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:17.454693   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:17.771072   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:17.774747   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:17.883957   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:17.955199   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:18.270824   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:18.274201   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:18.386259   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:18.455479   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:18.780047   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:18.793097   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:18.884250   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:18.955906   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:19.271291   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:19.275143   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:19.383751   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:19.454682   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:19.771293   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:19.774461   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:19.884266   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:19.955263   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:20.272011   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:20.275476   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:20.385087   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:20.455942   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:20.771469   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:20.775282   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:20.884137   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:20.956857   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:21.270966   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:21.274644   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:21.383440   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:21.454674   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:21.771154   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:21.777147   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:21.883974   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:21.954751   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:22.272425   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:22.275186   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:22.384110   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:22.454954   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:22.772250   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:22.774672   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:22.883786   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:22.955186   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:23.621433   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:23.622076   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:23.622409   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:23.629748   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:23.770714   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:23.774437   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:23.884061   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:23.955465   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:24.271012   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:24.274430   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:24.383201   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:24.455048   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:24.771144   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:24.775483   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:24.883955   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:24.955238   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:25.270853   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:25.274266   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:25.384519   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:25.454563   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:25.770920   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:25.774858   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:25.884044   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:25.956885   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:26.335532   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:26.343315   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:26.384512   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:26.454247   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:26.770872   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:26.774776   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:26.883623   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:26.954250   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:27.271527   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:27.274948   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:27.383596   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:27.454908   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:27.770265   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:27.774086   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:27.883946   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:27.954469   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:28.270856   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:28.274560   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:28.678904   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:28.679860   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:28.771330   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:28.775705   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:28.884033   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:28.955475   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:29.270176   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:29.274123   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:29.384063   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:29.456230   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:29.770662   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:29.774319   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:29.883929   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:29.955469   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:30.270907   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:30.274591   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:30.383583   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:30.454667   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:30.773176   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:30.778300   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:30.884229   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:30.954778   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:31.271993   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:31.275554   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:31.384353   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:31.455822   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:31.770936   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:31.774447   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:31.883691   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:31.956027   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:32.270884   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:32.274457   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:32.384530   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:32.455529   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:32.771100   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:32.775072   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:32.884058   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:32.954993   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:33.271371   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:33.274303   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:33.384014   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:33.455171   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:33.780199   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:33.784226   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:33.883518   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:33.954765   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:34.271424   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:34.274425   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:34.383596   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:34.454560   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:34.771706   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:34.774750   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:34.885101   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:34.958350   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:35.271245   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:35.274830   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:35.383684   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:35.455469   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:35.807600   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:35.807754   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:35.912794   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:35.966387   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:36.271359   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:36.274836   20973 kapi.go:107] duration metric: took 38.504367856s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 00:06:36.383542   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:36.455197   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:36.771696   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:36.884414   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:36.954959   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:37.270891   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:37.383603   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:37.454651   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:37.771254   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:37.884151   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:37.954699   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:38.271212   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:38.383607   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:38.455495   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:38.846075   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:38.884206   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:38.956038   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:39.271628   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:39.384915   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:39.455044   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:39.771135   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:39.883804   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:39.954154   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:40.270731   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:40.388583   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:40.454569   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:40.770593   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:40.884320   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:40.962694   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:41.271182   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:41.383525   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:41.454202   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:41.770437   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:41.884544   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:41.955235   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:42.270140   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:42.384629   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:42.454534   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:42.773060   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:42.883738   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:42.954576   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:43.271077   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:43.383788   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:43.454639   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:43.770329   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:43.883343   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:43.954749   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:44.271309   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:44.384920   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:44.454633   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:44.771792   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:44.882878   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:44.954918   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:45.272890   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:45.384093   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:45.454737   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:46.231550   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:46.231977   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:46.239003   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:46.277888   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:46.387144   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:46.455506   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:46.770996   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:46.884195   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:46.957673   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:47.272782   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:47.383872   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:47.455405   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:47.770871   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:47.884553   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:47.954672   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:48.273699   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:48.390993   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:48.455027   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:48.777837   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:48.884569   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:48.954894   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:49.275860   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:49.384761   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:49.455429   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:49.770433   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:49.884577   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:49.954551   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:50.270806   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:50.386619   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:50.464738   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:50.773765   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:50.884220   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:50.964917   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:51.270682   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:51.383642   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:51.456388   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:51.770697   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:51.888696   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:51.960416   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:52.271706   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:52.384741   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:52.455098   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:52.770333   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:52.884069   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:52.954785   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:53.273009   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:53.386712   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:53.455112   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:53.771982   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:53.884513   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:53.955182   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:54.271503   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:54.385481   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:54.454436   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:54.771262   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:54.883632   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:54.954779   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:55.271557   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:55.384456   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:55.454343   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:55.771199   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:55.884289   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:55.955484   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:56.275413   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:56.384305   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:56.454999   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:56.770621   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:56.884516   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:56.955294   20973 kapi.go:107] duration metric: took 57.506059197s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 00:06:57.270270   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:57.383930   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:57.771289   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:57.884183   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:58.271600   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:58.383046   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:58.771509   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:58.884944   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:59.270654   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:59.384427   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:59.770756   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:59.883691   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:00.270898   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:00.383570   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:00.771218   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:00.885323   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:01.270764   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:01.383489   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:01.772498   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:01.884966   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:02.273616   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:02.384057   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:02.773064   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:02.883155   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:03.273005   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:03.383714   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:03.771438   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:03.888010   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:04.274349   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:04.390944   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:04.777206   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:04.887010   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:05.272171   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:05.387357   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:05.770905   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:05.883443   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:06.270687   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:06.383559   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:06.820714   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:06.884690   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:07.271346   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:07.385509   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:07.771206   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:07.884685   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:08.270235   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:08.383693   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:08.790118   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:08.884478   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:09.306631   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:09.383956   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:09.772425   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:09.883498   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:10.282960   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:10.392164   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:10.771614   20973 kapi.go:107] duration metric: took 1m13.005926576s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 00:07:10.885948   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:11.384895   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:11.883732   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:12.383820   20973 kapi.go:107] duration metric: took 1m11.503762856s to wait for kubernetes.io/minikube-addons=gcp-auth ...
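
The kapi.go:96 entries above show minikube repeatedly listing each addon's pods by label selector until they leave Pending, then recording the total wait as a duration metric (roughly 38.5s for registry, 57.5s for csi-hostpath-driver, 1m13s for ingress-nginx, and 1m11.5s for gcp-auth). As a rough illustration only, and not minikube's actual kapi implementation, a client-go sketch of that polling pattern could look like the following; the poll interval, the Running-phase check, and the error handling are assumptions.

package kapiwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForLabel polls pods matching selector in ns until all of them report
// phase Running, mirroring the repeated "waiting for pod ... current state:
// Pending" lines in the log above. It returns the elapsed time, which the
// log reports as a "duration metric".
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	deadline := start.Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					break
				}
			}
			if ready {
				return time.Since(start), nil
			}
		}
		if time.Now().After(deadline) {
			return 0, fmt.Errorf("timed out waiting for pods matching %q", selector)
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption, not minikube's value
	}
}

Calling such a helper with a selector like kubernetes.io/minikube-addons=registry would reproduce the kind of wait reported above; the target namespace and timeout would be caller-supplied and are likewise assumptions here.
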
	I0717 00:07:12.385435   20973 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-860537 cluster.
	I0717 00:07:12.386678   20973 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 00:07:12.387811   20973 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 00:07:12.389336   20973 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, metrics-server, nvidia-device-plugin, helm-tiller, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0717 00:07:12.390488   20973 addons.go:510] duration metric: took 1m23.702134744s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns metrics-server nvidia-device-plugin helm-tiller inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0717 00:07:12.390524   20973 start.go:246] waiting for cluster config update ...
	I0717 00:07:12.390540   20973 start.go:255] writing updated cluster config ...
	I0717 00:07:12.390791   20973 ssh_runner.go:195] Run: rm -f paused
	I0717 00:07:12.439906   20973 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:07:12.441547   20973 out.go:177] * Done! kubectl is now configured to use "addons-860537" cluster and "default" namespace by default
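
The gcp-auth notes above say that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key, or that existing pods can be picked up by rerunning addons enable with --refresh. Below is a minimal, hypothetical Go sketch of a pod spec carrying that label; only the label key is taken from the output, while the value "true", the pod name, and the image are illustrative assumptions.

package gcpauthskip

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipPod builds a pod that opts out of gcp-auth credential injection via the
// gcp-auth-skip-secret label mentioned in the minikube output above. The label
// value "true" and the container details are assumptions for illustration.
func skipPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-auth-demo",
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "web",
				Image: "docker.io/library/nginx", // image name reused from the container list below; tag omitted
			}},
		},
	}
}
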
	
	
	==> CRI-O <==
	Jul 17 00:10:21 addons-860537 crio[687]: time="2024-07-17 00:10:21.310888122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721175021310850347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d348f721-5188-4c93-a890-2adcda6a229b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:10:21 addons-860537 crio[687]: time="2024-07-17 00:10:21.311750047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5afa57b-b2d5-4a79-8dc8-cf86005ab56e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:10:21 addons-860537 crio[687]: time="2024-07-17 00:10:21.311807032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5afa57b-b2d5-4a79-8dc8-cf86005ab56e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:10:21 addons-860537 crio[687]: time="2024-07-17 00:10:21.312137071Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72cd5f094e299b8179e884eef96004efa244774d4294b711ff4bbc3af41a0c46,PodSandboxId:e74f7f74515b8a9ddbc3b6d06cd28a0dc55372b7b0a231fcb3a3787473b76523,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721175012544117643,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-4hl58,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de2a9e7d-611b-4332-ba3c-d631603eed79,},Annotations:map[string]string{io.kubernetes.container.hash: f230b00f,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4aeb9a53a78efb30ea9c9e8d2102c15f47adb5bb24e3e130a88b7b403dbae31,PodSandboxId:41d2b6c3006a33fc552d9fd5e4e865f8d467c62367c3b4e1ce2c7673ead0403b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721174871850489359,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19a96ab4-cd55-4419-b5a7-8b9e8823879f,},Annotations:map[string]string{io.kubernet
es.container.hash: 78f7281c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:017f08795f68a8a748ca3978528da32e92543780a43c0a7bb490b2061d5dbed5,PodSandboxId:424a5c95da73b09ff2a452bfa53e0840802ebfc4ee27e19fddba405955f393f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721174848676635811,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-rw54z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 22484240-e20c-4ef5-a0da-50269ed47664,},Annotations:map[string]string{io.kubernetes.container.hash: 49b847eb,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e72ac612a045e5b4a380c6a285d77d09037d47c93c00e629abed0a31e9e8b7e,PodSandboxId:de67c66d118049e89e78e1921b5ce1cb66346dd480b01c8b20204723dbec2db6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721174831597632817,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-q5sd8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 36d0948a-8a19-4f23-b53e-3a648152fffb,},Annotations:map[string]string{io.kubernetes.container.hash: adcd5a98,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93744428c129022203eafb305d53c6b3d3126455899fe8e66edda7ad2f34549,PodSandboxId:db99e22b51abfec93a03914450e8fffa5c2401d35d4ab38f960db45c5aa5b8b2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721174803009552389,Labels:map[string]
string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jqn6l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a87259b4-9d7e-472c-ad5d-cdac88b8d5b8,},Annotations:map[string]string{io.kubernetes.container.hash: e1733cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efec1a7218ba11240305e59bd2e782259b8e3a954de33c9df97a35cd263fb1d9,PodSandboxId:bdb0aa9ddb0c9077ce0f0d44e959d3d476e368ead77207ca24b15dc0f99f8653,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:17
21174802518354462,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fhfp2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50aa8db6-2541-4fd4-85b7-e6894fe54ae0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a8bde3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354b0940f3ebdf913ebdb3f69e24dab26c45d80e7b300db4f838bbb2a6a84e0,PodSandboxId:749c2c994691e6ce06667302acac77001b8f7655df5a3480ac6078efbd0fc599,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf2
6a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721174800132888251,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-dz45b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ba263468-5fa1-4873-a77c-8a7e8c823342,},Annotations:map[string]string{io.kubernetes.container.hash: a97e0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b924521f0efc85318dd15de892045d5cdfeed64a916871904e3aa5a54dd082ff,PodSandboxId:825f2231b6de9c3036ee45c8c9d2229d8a35eae7c247c28138cd8bba2c7b9592,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898c
ff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721174784104359949,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-h6wwn,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4e01edaf-fd5a-4055-adc7-3814ccc74e83,},Annotations:map[string]string{io.kubernetes.container.hash: 3582c1c3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a0958232c4d0cbf85d9c18df41696a349e1a6a0f6f5defb4f1dc6a246a7e98,PodSandboxId:347ecc25291eb328e696b3b1b011705fda8af3ca4c8febe3af8c56f7475081ae,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c8
9de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721174761108799539,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-zq4m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 332284a0-4c05-4737-8669-c71012684bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d41a249,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614282a521d58d24e3137e97082a860d78febe30c3660bd7c9ee1780d71ca762,PodSandboxId:40a89e8b774b2eaf3dcdb95c1e983163d964f50668976d4888e995015a9e298c,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721174757446145150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71073df2-0967-430a-94e9-5a3641c16eed,},Annotations:map[string]string{io.kubernetes.container.hash: babda854,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9267303f1604897f5cf761e45ef2ed1f785ce69e518b730078260f842874cff,PodSandboxId:59f13eb22ca98f3e40c185c2cffb4fdee151409a08a25f26ea8c7256b8cc7f95,Metadata:&ContainerMetadata{Name:coredns,Att
empt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721174753977793670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x569p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e4c6914-ede3-4b0b-b696-83768c15f61f,},Annotations:map[string]string{io.kubernetes.container.hash: e05fb4eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:90a0b8d48757698d0e608dfe79b2fe94258e6c3b05b82f8c4085c8a9b7c185b6,PodSandboxId:0775f25ceaca034873b1f2ad4ad7d9c5182c41cc593a5d3f5a13cd4f51e10923,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721174750775514432,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6kwx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bc49e4-c111-4184-83f6-14800ece6dc1,},Annotations:map[string]string{io.kubernetes.container.hash: c3480f7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6262ffd56c7a125e22a281b
77eeaa64a1290bd2861165394c264dba8c5696f,PodSandboxId:8ff22cb5467bd2c46084782c2ba9d24b711e1617234cda0ae434856e0366c202,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721174729412037968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d60fd94d932d2ba8608f510ed5f190a,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b03f56d8b1d6fc271362e7a60c4eedfb507e3c3d4f
e5f1ce8b2687a2fc58e2f,PodSandboxId:e1ed3dab6c8e597298b8bd982950ce5eba8403cf7043c5825f919f68cf17712c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721174729407897443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9cd091645e319574d7f043d4df0944d,},Annotations:map[string]string{io.kubernetes.container.hash: 8aa49d05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70759f229bbf27cec5cd2c67572fdb817b6cb5f562dd0fa5b3befe52e07b6cb9,PodSandboxId:f47fd7ea3b45
50df5e19d53a60c4abadc31d8ea21bd7cd329795fde4d861656f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721174729349060746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f96988fa3ac783d3dee6b95d6d3bfb5,},Annotations:map[string]string{io.kubernetes.container.hash: f347147a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a177722461d94437949f90ba19d018220705caf3cbff6f498441d67ca21aeda8,PodSandboxId:3f3ff8d1f348a8df3b21eafbb8c99
59556d0bc13008539df19fdc49ba79dbb28,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721174729240262165,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94cfcc47ed48397882029d326991bf1f,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5afa57b-b2d5-4a79-8dc8-cf86005ab56e name=/runtime.v1.RuntimeS
ervice/ListContainers
	Jul 17 00:10:21 addons-860537 crio[687]: time="2024-07-17 00:10:21.351570157Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=564cefb8-1c36-4401-93e1-810d423723cf name=/runtime.v1.RuntimeService/Version
	Jul 17 00:10:21 addons-860537 crio[687]: time="2024-07-17 00:10:21.351667726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=564cefb8-1c36-4401-93e1-810d423723cf name=/runtime.v1.RuntimeService/Version
	Jul 17 00:10:21 addons-860537 crio[687]: time="2024-07-17 00:10:21.353430223Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37bca93f-292f-4b9b-8f52-c3c83ae99e70 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:10:21 addons-860537 crio[687]: time="2024-07-17 00:10:21.358500356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721175021358473424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37bca93f-292f-4b9b-8f52-c3c83ae99e70 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:10:21 addons-860537 crio[687]: time="2024-07-17 00:10:21.366511070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52716ea1-ccd9-461e-a6ab-9d953d547455 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:10:21 addons-860537 crio[687]: time="2024-07-17 00:10:21.366665950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52716ea1-ccd9-461e-a6ab-9d953d547455 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:10:21 addons-860537 crio[687]: time="2024-07-17 00:10:21.367079178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72cd5f094e299b8179e884eef96004efa244774d4294b711ff4bbc3af41a0c46,PodSandboxId:e74f7f74515b8a9ddbc3b6d06cd28a0dc55372b7b0a231fcb3a3787473b76523,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721175012544117643,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-4hl58,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de2a9e7d-611b-4332-ba3c-d631603eed79,},Annotations:map[string]string{io.kubernetes.container.hash: f230b00f,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4aeb9a53a78efb30ea9c9e8d2102c15f47adb5bb24e3e130a88b7b403dbae31,PodSandboxId:41d2b6c3006a33fc552d9fd5e4e865f8d467c62367c3b4e1ce2c7673ead0403b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721174871850489359,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19a96ab4-cd55-4419-b5a7-8b9e8823879f,},Annotations:map[string]string{io.kubernet
es.container.hash: 78f7281c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:017f08795f68a8a748ca3978528da32e92543780a43c0a7bb490b2061d5dbed5,PodSandboxId:424a5c95da73b09ff2a452bfa53e0840802ebfc4ee27e19fddba405955f393f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721174848676635811,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-rw54z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 22484240-e20c-4ef5-a0da-50269ed47664,},Annotations:map[string]string{io.kubernetes.container.hash: 49b847eb,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e72ac612a045e5b4a380c6a285d77d09037d47c93c00e629abed0a31e9e8b7e,PodSandboxId:de67c66d118049e89e78e1921b5ce1cb66346dd480b01c8b20204723dbec2db6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721174831597632817,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-q5sd8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 36d0948a-8a19-4f23-b53e-3a648152fffb,},Annotations:map[string]string{io.kubernetes.container.hash: adcd5a98,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93744428c129022203eafb305d53c6b3d3126455899fe8e66edda7ad2f34549,PodSandboxId:db99e22b51abfec93a03914450e8fffa5c2401d35d4ab38f960db45c5aa5b8b2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721174803009552389,Labels:map[string]
string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jqn6l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a87259b4-9d7e-472c-ad5d-cdac88b8d5b8,},Annotations:map[string]string{io.kubernetes.container.hash: e1733cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efec1a7218ba11240305e59bd2e782259b8e3a954de33c9df97a35cd263fb1d9,PodSandboxId:bdb0aa9ddb0c9077ce0f0d44e959d3d476e368ead77207ca24b15dc0f99f8653,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:17
21174802518354462,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fhfp2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50aa8db6-2541-4fd4-85b7-e6894fe54ae0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a8bde3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354b0940f3ebdf913ebdb3f69e24dab26c45d80e7b300db4f838bbb2a6a84e0,PodSandboxId:749c2c994691e6ce06667302acac77001b8f7655df5a3480ac6078efbd0fc599,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf2
6a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721174800132888251,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-dz45b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ba263468-5fa1-4873-a77c-8a7e8c823342,},Annotations:map[string]string{io.kubernetes.container.hash: a97e0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b924521f0efc85318dd15de892045d5cdfeed64a916871904e3aa5a54dd082ff,PodSandboxId:825f2231b6de9c3036ee45c8c9d2229d8a35eae7c247c28138cd8bba2c7b9592,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898c
ff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721174784104359949,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-h6wwn,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4e01edaf-fd5a-4055-adc7-3814ccc74e83,},Annotations:map[string]string{io.kubernetes.container.hash: 3582c1c3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a0958232c4d0cbf85d9c18df41696a349e1a6a0f6f5defb4f1dc6a246a7e98,PodSandboxId:347ecc25291eb328e696b3b1b011705fda8af3ca4c8febe3af8c56f7475081ae,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c8
9de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721174761108799539,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-zq4m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 332284a0-4c05-4737-8669-c71012684bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d41a249,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614282a521d58d24e3137e97082a860d78febe30c3660bd7c9ee1780d71ca762,PodSandboxId:40a89e8b774b2eaf3dcdb95c1e983163d964f50668976d4888e995015a9e298c,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721174757446145150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71073df2-0967-430a-94e9-5a3641c16eed,},Annotations:map[string]string{io.kubernetes.container.hash: babda854,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9267303f1604897f5cf761e45ef2ed1f785ce69e518b730078260f842874cff,PodSandboxId:59f13eb22ca98f3e40c185c2cffb4fdee151409a08a25f26ea8c7256b8cc7f95,Metadata:&ContainerMetadata{Name:coredns,Att
empt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721174753977793670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x569p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e4c6914-ede3-4b0b-b696-83768c15f61f,},Annotations:map[string]string{io.kubernetes.container.hash: e05fb4eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:90a0b8d48757698d0e608dfe79b2fe94258e6c3b05b82f8c4085c8a9b7c185b6,PodSandboxId:0775f25ceaca034873b1f2ad4ad7d9c5182c41cc593a5d3f5a13cd4f51e10923,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721174750775514432,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6kwx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bc49e4-c111-4184-83f6-14800ece6dc1,},Annotations:map[string]string{io.kubernetes.container.hash: c3480f7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6262ffd56c7a125e22a281b
77eeaa64a1290bd2861165394c264dba8c5696f,PodSandboxId:8ff22cb5467bd2c46084782c2ba9d24b711e1617234cda0ae434856e0366c202,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721174729412037968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d60fd94d932d2ba8608f510ed5f190a,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b03f56d8b1d6fc271362e7a60c4eedfb507e3c3d4f
e5f1ce8b2687a2fc58e2f,PodSandboxId:e1ed3dab6c8e597298b8bd982950ce5eba8403cf7043c5825f919f68cf17712c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721174729407897443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9cd091645e319574d7f043d4df0944d,},Annotations:map[string]string{io.kubernetes.container.hash: 8aa49d05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70759f229bbf27cec5cd2c67572fdb817b6cb5f562dd0fa5b3befe52e07b6cb9,PodSandboxId:f47fd7ea3b45
50df5e19d53a60c4abadc31d8ea21bd7cd329795fde4d861656f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721174729349060746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f96988fa3ac783d3dee6b95d6d3bfb5,},Annotations:map[string]string{io.kubernetes.container.hash: f347147a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a177722461d94437949f90ba19d018220705caf3cbff6f498441d67ca21aeda8,PodSandboxId:3f3ff8d1f348a8df3b21eafbb8c99
59556d0bc13008539df19fdc49ba79dbb28,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721174729240262165,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94cfcc47ed48397882029d326991bf1f,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52716ea1-ccd9-461e-a6ab-9d953d547455 name=/runtime.v1.RuntimeS
ervice/ListContainers
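	The Version/ImageFsInfo/ListContainers polling above is the kubelet querying CRI-O over its gRPC socket; the same data can be pulled by hand when spot-checking a node. A minimal sketch, assuming the cluster is still running and that crictl is available inside the guest (an assumption here, not something the captured log confirms):

	out/minikube-linux-amd64 -p addons-860537 ssh "sudo crictl version"
	out/minikube-linux-amd64 -p addons-860537 ssh "sudo crictl ps -a"
	out/minikube-linux-amd64 -p addons-860537 ssh "sudo crictl imagefsinfo"

	crictl ps -a prints roughly the same table as the container status section below, and crictl imagefsinfo mirrors the ImageFsInfo response (mountpoint, bytes used, inodes used).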
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	72cd5f094e299       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        8 seconds ago       Running             hello-world-app           0                   e74f7f74515b8       hello-world-app-6778b5fc9f-4hl58
	e4aeb9a53a78e       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   41d2b6c3006a3       nginx
	017f08795f68a       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        2 minutes ago       Running             headlamp                  0                   424a5c95da73b       headlamp-7867546754-rw54z
	0e72ac612a045       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   de67c66d11804       gcp-auth-5db96cd9b4-q5sd8
	d93744428c129       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             3 minutes ago       Exited              patch                     1                   db99e22b51abf       ingress-nginx-admission-patch-jqn6l
	efec1a7218ba1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   bdb0aa9ddb0c9       ingress-nginx-admission-create-fhfp2
	6354b0940f3eb       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   749c2c994691e       local-path-provisioner-8d985888d-dz45b
	b924521f0efc8       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              3 minutes ago       Running             yakd                      0                   825f2231b6de9       yakd-dashboard-799879c74f-h6wwn
	f6a0958232c4d       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   347ecc25291eb       metrics-server-c59844bb4-zq4m7
	614282a521d58       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   40a89e8b774b2       storage-provisioner
	e9267303f1604       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   59f13eb22ca98       coredns-7db6d8ff4d-x569p
	90a0b8d487576       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                             4 minutes ago       Running             kube-proxy                0                   0775f25ceaca0       kube-proxy-6kwx2
	9e6262ffd56c7       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                             4 minutes ago       Running             kube-scheduler            0                   8ff22cb5467bd       kube-scheduler-addons-860537
	5b03f56d8b1d6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   e1ed3dab6c8e5       etcd-addons-860537
	70759f229bbf2       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                             4 minutes ago       Running             kube-apiserver            0                   f47fd7ea3b455       kube-apiserver-addons-860537
	a177722461d94       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                             4 minutes ago       Running             kube-controller-manager   0                   3f3ff8d1f348a       kube-controller-manager-addons-860537
	
	
	==> coredns [e9267303f1604897f5cf761e45ef2ed1f785ce69e518b730078260f842874cff] <==
	[INFO] 10.244.0.22:47384 - 59305 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0010512s
	[INFO] 10.244.0.22:58003 - 7769 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123783s
	[INFO] 10.244.0.22:51524 - 56012 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130288s
	[INFO] 10.244.0.22:33725 - 31098 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137565s
	[INFO] 10.244.0.22:53943 - 24036 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063995s
	[INFO] 10.244.0.22:53391 - 39142 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000971605s
	[INFO] 10.244.0.22:43098 - 59047 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001273392s
	[INFO] 10.244.0.26:37982 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000274064s
	[INFO] 10.244.0.26:44792 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127581s
	[INFO] 10.244.0.8:55787 - 4706 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000396096s
	[INFO] 10.244.0.8:55787 - 33126 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000580831s
	[INFO] 10.244.0.8:55912 - 23124 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000096631s
	[INFO] 10.244.0.8:55912 - 39511 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00010171s
	[INFO] 10.244.0.8:54214 - 18215 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000071519s
	[INFO] 10.244.0.8:54214 - 28961 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114459s
	[INFO] 10.244.0.8:56515 - 23268 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012098s
	[INFO] 10.244.0.8:56515 - 4582 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000091374s
	[INFO] 10.244.0.8:46359 - 4922 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089402s
	[INFO] 10.244.0.8:46359 - 3383 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00008213s
	[INFO] 10.244.0.8:44763 - 43490 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051744s
	[INFO] 10.244.0.8:44763 - 29676 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114211s
	[INFO] 10.244.0.8:48501 - 21602 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050389s
	[INFO] 10.244.0.8:48501 - 25184 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00005842s
	[INFO] 10.244.0.8:58290 - 18114 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000069139s
	[INFO] 10.244.0.8:58290 - 15557 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000043566s
	
	
	==> describe nodes <==
	Name:               addons-860537
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-860537
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=addons-860537
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_05_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-860537
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:05:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-860537
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:10:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:08:08 +0000   Wed, 17 Jul 2024 00:05:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:08:08 +0000   Wed, 17 Jul 2024 00:05:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:08:08 +0000   Wed, 17 Jul 2024 00:05:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:08:08 +0000   Wed, 17 Jul 2024 00:05:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    addons-860537
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c671ebb5ea348aeab41add3caf066ee
	  System UUID:                5c671ebb-5ea3-48ae-ab41-add3caf066ee
	  Boot ID:                    cf2dd3c3-1cd2-4106-8254-8d19829cd428
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-4hl58          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-5db96cd9b4-q5sd8                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  headlamp                    headlamp-7867546754-rw54z                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 coredns-7db6d8ff4d-x569p                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m33s
	  kube-system                 etcd-addons-860537                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m47s
	  kube-system                 kube-apiserver-addons-860537              250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-controller-manager-addons-860537     200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-proxy-6kwx2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-addons-860537              100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 metrics-server-c59844bb4-zq4m7            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m29s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  local-path-storage          local-path-provisioner-8d985888d-dz45b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  yakd-dashboard              yakd-dashboard-799879c74f-h6wwn           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m29s  kube-proxy       
	  Normal  Starting                 4m47s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m47s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m47s  kubelet          Node addons-860537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s  kubelet          Node addons-860537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s  kubelet          Node addons-860537 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m46s  kubelet          Node addons-860537 status is now: NodeReady
	  Normal  RegisteredNode           4m33s  node-controller  Node addons-860537 event: Registered Node addons-860537 in Controller
	
	
	==> dmesg <==
	[  +0.096676] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.431754] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	[  +0.116148] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.170952] kauditd_printk_skb: 105 callbacks suppressed
	[  +5.045436] kauditd_printk_skb: 126 callbacks suppressed
	[Jul17 00:06] kauditd_printk_skb: 101 callbacks suppressed
	[ +24.813034] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.347923] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.895175] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.966756] kauditd_printk_skb: 59 callbacks suppressed
	[Jul17 00:07] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.353377] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.166629] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.738567] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.050701] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.623863] kauditd_printk_skb: 33 callbacks suppressed
	[  +9.017464] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.854283] kauditd_printk_skb: 35 callbacks suppressed
	[ +14.652610] kauditd_printk_skb: 13 callbacks suppressed
	[Jul17 00:08] kauditd_printk_skb: 2 callbacks suppressed
	[ +23.727612] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.261115] kauditd_printk_skb: 33 callbacks suppressed
	[ +11.360842] kauditd_printk_skb: 6 callbacks suppressed
	[Jul17 00:10] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.147405] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [5b03f56d8b1d6fc271362e7a60c4eedfb507e3c3d4fe5f1ce8b2687a2fc58e2f] <==
	{"level":"info","ts":"2024-07-17T00:06:46.208406Z","caller":"traceutil/trace.go:171","msg":"trace[1516546414] linearizableReadLoop","detail":"{readStateIndex:1069; appliedIndex:1068; }","duration":"452.061456ms","start":"2024-07-17T00:06:45.756327Z","end":"2024-07-17T00:06:46.208389Z","steps":["trace[1516546414] 'read index received'  (duration: 451.906328ms)","trace[1516546414] 'applied index is now lower than readState.Index'  (duration: 154.631µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:06:46.209279Z","caller":"traceutil/trace.go:171","msg":"trace[1498206063] transaction","detail":"{read_only:false; response_revision:1039; number_of_response:1; }","duration":"496.609715ms","start":"2024-07-17T00:06:45.712649Z","end":"2024-07-17T00:06:46.209259Z","steps":["trace[1498206063] 'process raft request'  (duration: 495.629827ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:06:46.209459Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:06:45.712632Z","time spent":"496.721639ms","remote":"127.0.0.1:34550","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-860537\" mod_revision:945 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-860537\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-860537\" > >"}
	{"level":"warn","ts":"2024-07-17T00:06:46.21042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"454.08225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14358"}
	{"level":"info","ts":"2024-07-17T00:06:46.210482Z","caller":"traceutil/trace.go:171","msg":"trace[1761662920] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1039; }","duration":"454.162715ms","start":"2024-07-17T00:06:45.756304Z","end":"2024-07-17T00:06:46.210467Z","steps":["trace[1761662920] 'agreement among raft nodes before linearized reading'  (duration: 454.022518ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:06:46.210508Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:06:45.75629Z","time spent":"454.209743ms","remote":"127.0.0.1:34462","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14382,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-07-17T00:06:46.211335Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"341.35064ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11309"}
	{"level":"info","ts":"2024-07-17T00:06:46.211391Z","caller":"traceutil/trace.go:171","msg":"trace[878590856] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1039; }","duration":"341.422863ms","start":"2024-07-17T00:06:45.869955Z","end":"2024-07-17T00:06:46.211378Z","steps":["trace[878590856] 'agreement among raft nodes before linearized reading'  (duration: 341.103114ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:06:46.211413Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:06:45.869943Z","time spent":"341.464242ms","remote":"127.0.0.1:34462","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11333,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-17T00:06:46.213947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.234246ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-jqn6l\" ","response":"range_response_count:1 size:4239"}
	{"level":"info","ts":"2024-07-17T00:06:46.214121Z","caller":"traceutil/trace.go:171","msg":"trace[1836413192] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-jqn6l; range_end:; response_count:1; response_revision:1039; }","duration":"147.722834ms","start":"2024-07-17T00:06:46.066387Z","end":"2024-07-17T00:06:46.21411Z","steps":["trace[1836413192] 'agreement among raft nodes before linearized reading'  (duration: 145.819051ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:06:46.214798Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.864432ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85504"}
	{"level":"info","ts":"2024-07-17T00:06:46.214907Z","caller":"traceutil/trace.go:171","msg":"trace[1934117135] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1039; }","duration":"275.993426ms","start":"2024-07-17T00:06:45.9389Z","end":"2024-07-17T00:06:46.214894Z","steps":["trace[1934117135] 'agreement among raft nodes before linearized reading'  (duration: 273.621564ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:06.799109Z","caller":"traceutil/trace.go:171","msg":"trace[208696071] linearizableReadLoop","detail":"{readStateIndex:1163; appliedIndex:1162; }","duration":"219.199233ms","start":"2024-07-17T00:07:06.579817Z","end":"2024-07-17T00:07:06.799016Z","steps":["trace[208696071] 'read index received'  (duration: 218.585581ms)","trace[208696071] 'applied index is now lower than readState.Index'  (duration: 612.965µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:07:06.799941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.089401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-07-17T00:07:06.80006Z","caller":"traceutil/trace.go:171","msg":"trace[1668775163] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1128; }","duration":"220.271884ms","start":"2024-07-17T00:07:06.579773Z","end":"2024-07-17T00:07:06.800045Z","steps":["trace[1668775163] 'agreement among raft nodes before linearized reading'  (duration: 220.052825ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:06.800157Z","caller":"traceutil/trace.go:171","msg":"trace[1010560812] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"226.394688ms","start":"2024-07-17T00:07:06.573753Z","end":"2024-07-17T00:07:06.800148Z","steps":["trace[1010560812] 'process raft request'  (duration: 224.705939ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:09.288112Z","caller":"traceutil/trace.go:171","msg":"trace[1535830678] linearizableReadLoop","detail":"{readStateIndex:1168; appliedIndex:1167; }","duration":"238.997396ms","start":"2024-07-17T00:07:09.049096Z","end":"2024-07-17T00:07:09.288093Z","steps":["trace[1535830678] 'read index received'  (duration: 234.521743ms)","trace[1535830678] 'applied index is now lower than readState.Index'  (duration: 4.474497ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:07:09.28835Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.236552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-17T00:07:09.288403Z","caller":"traceutil/trace.go:171","msg":"trace[1869272989] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1132; }","duration":"239.319484ms","start":"2024-07-17T00:07:09.049071Z","end":"2024-07-17T00:07:09.28839Z","steps":["trace[1869272989] 'agreement among raft nodes before linearized reading'  (duration: 239.176445ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:07:26.7323Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.979813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3966"}
	{"level":"info","ts":"2024-07-17T00:07:26.732381Z","caller":"traceutil/trace.go:171","msg":"trace[1432672864] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1312; }","duration":"103.099702ms","start":"2024-07-17T00:07:26.629262Z","end":"2024-07-17T00:07:26.732362Z","steps":["trace[1432672864] 'range keys from in-memory index tree'  (duration: 102.842956ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:08:33.678834Z","caller":"traceutil/trace.go:171","msg":"trace[248690361] linearizableReadLoop","detail":"{readStateIndex:1728; appliedIndex:1727; }","duration":"101.19658ms","start":"2024-07-17T00:08:33.577578Z","end":"2024-07-17T00:08:33.678774Z","steps":["trace[248690361] 'read index received'  (duration: 100.972451ms)","trace[248690361] 'applied index is now lower than readState.Index'  (duration: 223.075µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:08:33.679163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.515953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/external-snapshotter-leaderelection\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:08:33.67928Z","caller":"traceutil/trace.go:171","msg":"trace[1759932876] range","detail":"{range_begin:/registry/roles/kube-system/external-snapshotter-leaderelection; range_end:; response_count:0; response_revision:1665; }","duration":"101.704275ms","start":"2024-07-17T00:08:33.577552Z","end":"2024-07-17T00:08:33.679256Z","steps":["trace[1759932876] 'agreement among raft nodes before linearized reading'  (duration: 101.507072ms)"],"step_count":1}
	
	
	==> gcp-auth [0e72ac612a045e5b4a380c6a285d77d09037d47c93c00e629abed0a31e9e8b7e] <==
	2024/07/17 00:07:11 GCP Auth Webhook started!
	2024/07/17 00:07:12 Ready to marshal response ...
	2024/07/17 00:07:12 Ready to write response ...
	2024/07/17 00:07:12 Ready to marshal response ...
	2024/07/17 00:07:12 Ready to write response ...
	2024/07/17 00:07:21 Ready to marshal response ...
	2024/07/17 00:07:21 Ready to write response ...
	2024/07/17 00:07:22 Ready to marshal response ...
	2024/07/17 00:07:22 Ready to write response ...
	2024/07/17 00:07:22 Ready to marshal response ...
	2024/07/17 00:07:22 Ready to write response ...
	2024/07/17 00:07:22 Ready to marshal response ...
	2024/07/17 00:07:22 Ready to write response ...
	2024/07/17 00:07:22 Ready to marshal response ...
	2024/07/17 00:07:22 Ready to write response ...
	2024/07/17 00:07:25 Ready to marshal response ...
	2024/07/17 00:07:25 Ready to write response ...
	2024/07/17 00:07:48 Ready to marshal response ...
	2024/07/17 00:07:48 Ready to write response ...
	2024/07/17 00:07:57 Ready to marshal response ...
	2024/07/17 00:07:57 Ready to write response ...
	2024/07/17 00:08:19 Ready to marshal response ...
	2024/07/17 00:08:19 Ready to write response ...
	2024/07/17 00:10:11 Ready to marshal response ...
	2024/07/17 00:10:11 Ready to write response ...
	
	
	==> kernel <==
	 00:10:21 up 5 min,  0 users,  load average: 0.48, 1.17, 0.63
	Linux addons-860537 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [70759f229bbf27cec5cd2c67572fdb817b6cb5f562dd0fa5b3befe52e07b6cb9] <==
	W0717 00:07:12.697284       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 00:07:12.697413       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0717 00:07:12.698757       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.177.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.102.177.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.102.177.0:443: connect: connection refused
	E0717 00:07:12.703352       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.177.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.102.177.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.102.177.0:443: connect: connection refused
	I0717 00:07:12.840257       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 00:07:22.556039       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.80.170"}
	E0717 00:07:30.248726       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.251:8443->10.244.0.28:59496: read: connection reset by peer
	I0717 00:07:43.293564       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 00:07:44.320617       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0717 00:07:48.803149       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 00:07:48.996780       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.151.34"}
	I0717 00:08:10.976132       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0717 00:08:35.784373       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:08:35.784947       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:08:35.812132       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:08:35.812202       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:08:35.849729       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:08:35.849781       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:08:35.862952       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:08:35.863015       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 00:08:36.851072       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 00:08:36.863364       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 00:08:36.891472       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0717 00:10:11.371996       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.85.133"}
	
	
	==> kube-controller-manager [a177722461d94437949f90ba19d018220705caf3cbff6f498441d67ca21aeda8] <==
	E0717 00:08:55.713867       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:09:05.959563       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:09:05.960360       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:09:13.902197       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:09:13.902339       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:09:17.297231       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:09:17.297507       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:09:18.623972       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:09:18.624156       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:09:44.167950       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:09:44.168137       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:09:49.921956       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:09:49.922067       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:10:01.666172       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:10:01.666237       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:10:06.232752       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:10:06.232807       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 00:10:11.244065       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="60.889321ms"
	I0717 00:10:11.270328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="26.148332ms"
	I0717 00:10:11.271978       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="115.114µs"
	I0717 00:10:12.876483       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="16.387296ms"
	I0717 00:10:12.876565       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="34.494µs"
	I0717 00:10:13.475953       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0717 00:10:13.479417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="3.635µs"
	I0717 00:10:13.484369       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	
	
	==> kube-proxy [90a0b8d48757698d0e608dfe79b2fe94258e6c3b05b82f8c4085c8a9b7c185b6] <==
	I0717 00:05:51.586931       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:05:51.626209       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.251"]
	I0717 00:05:51.711525       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:05:51.711577       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:05:51.711594       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:05:51.720196       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:05:51.720422       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:05:51.720513       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:05:51.722559       1 config.go:192] "Starting service config controller"
	I0717 00:05:51.722585       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:05:51.722608       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:05:51.722612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:05:51.723035       1 config.go:319] "Starting node config controller"
	I0717 00:05:51.723041       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:05:51.822772       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:05:51.822832       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:05:51.823607       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9e6262ffd56c7a125e22a281b77eeaa64a1290bd2861165394c264dba8c5696f] <==
	W0717 00:05:32.856926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:05:32.857039       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:05:32.862870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:05:32.862958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:05:32.948937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:05:32.948984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 00:05:32.978124       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:05:32.978155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:05:33.024619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:05:33.024789       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:05:33.030384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:05:33.030508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:05:33.061017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:05:33.061150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:05:33.138270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:05:33.138331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:05:33.161237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 00:05:33.161342       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:05:33.227026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:05:33.227129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:05:33.256373       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:05:33.256418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:05:33.420513       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:05:33.421257       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 00:05:36.097807       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:10:11 addons-860537 kubelet[1285]: I0717 00:10:11.235620    1285 memory_manager.go:354] "RemoveStaleState removing state" podUID="961d65cb-7faf-4f3a-86ef-8916920fcba6" containerName="registry-proxy"
	Jul 17 00:10:11 addons-860537 kubelet[1285]: I0717 00:10:11.377762    1285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/de2a9e7d-611b-4332-ba3c-d631603eed79-gcp-creds\") pod \"hello-world-app-6778b5fc9f-4hl58\" (UID: \"de2a9e7d-611b-4332-ba3c-d631603eed79\") " pod="default/hello-world-app-6778b5fc9f-4hl58"
	Jul 17 00:10:11 addons-860537 kubelet[1285]: I0717 00:10:11.377817    1285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbkg5\" (UniqueName: \"kubernetes.io/projected/de2a9e7d-611b-4332-ba3c-d631603eed79-kube-api-access-sbkg5\") pod \"hello-world-app-6778b5fc9f-4hl58\" (UID: \"de2a9e7d-611b-4332-ba3c-d631603eed79\") " pod="default/hello-world-app-6778b5fc9f-4hl58"
	Jul 17 00:10:12 addons-860537 kubelet[1285]: I0717 00:10:12.489302    1285 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5777\" (UniqueName: \"kubernetes.io/projected/a772ebab-91ad-4da1-be93-836f7a6b65a9-kube-api-access-v5777\") pod \"a772ebab-91ad-4da1-be93-836f7a6b65a9\" (UID: \"a772ebab-91ad-4da1-be93-836f7a6b65a9\") "
	Jul 17 00:10:12 addons-860537 kubelet[1285]: I0717 00:10:12.495935    1285 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a772ebab-91ad-4da1-be93-836f7a6b65a9-kube-api-access-v5777" (OuterVolumeSpecName: "kube-api-access-v5777") pod "a772ebab-91ad-4da1-be93-836f7a6b65a9" (UID: "a772ebab-91ad-4da1-be93-836f7a6b65a9"). InnerVolumeSpecName "kube-api-access-v5777". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:10:12 addons-860537 kubelet[1285]: I0717 00:10:12.590508    1285 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v5777\" (UniqueName: \"kubernetes.io/projected/a772ebab-91ad-4da1-be93-836f7a6b65a9-kube-api-access-v5777\") on node \"addons-860537\" DevicePath \"\""
	Jul 17 00:10:12 addons-860537 kubelet[1285]: I0717 00:10:12.825655    1285 scope.go:117] "RemoveContainer" containerID="2b5ae9e71d02dbbf82df1c9d01d115fb74f841094478df4fb7a86240a930ee44"
	Jul 17 00:10:12 addons-860537 kubelet[1285]: I0717 00:10:12.855461    1285 scope.go:117] "RemoveContainer" containerID="2b5ae9e71d02dbbf82df1c9d01d115fb74f841094478df4fb7a86240a930ee44"
	Jul 17 00:10:12 addons-860537 kubelet[1285]: E0717 00:10:12.856337    1285 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b5ae9e71d02dbbf82df1c9d01d115fb74f841094478df4fb7a86240a930ee44\": container with ID starting with 2b5ae9e71d02dbbf82df1c9d01d115fb74f841094478df4fb7a86240a930ee44 not found: ID does not exist" containerID="2b5ae9e71d02dbbf82df1c9d01d115fb74f841094478df4fb7a86240a930ee44"
	Jul 17 00:10:12 addons-860537 kubelet[1285]: I0717 00:10:12.856371    1285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b5ae9e71d02dbbf82df1c9d01d115fb74f841094478df4fb7a86240a930ee44"} err="failed to get container status \"2b5ae9e71d02dbbf82df1c9d01d115fb74f841094478df4fb7a86240a930ee44\": rpc error: code = NotFound desc = could not find container \"2b5ae9e71d02dbbf82df1c9d01d115fb74f841094478df4fb7a86240a930ee44\": container with ID starting with 2b5ae9e71d02dbbf82df1c9d01d115fb74f841094478df4fb7a86240a930ee44 not found: ID does not exist"
	Jul 17 00:10:12 addons-860537 kubelet[1285]: I0717 00:10:12.864199    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-4hl58" podStartSLOduration=1.163238846 podStartE2EDuration="1.864115653s" podCreationTimestamp="2024-07-17 00:10:11 +0000 UTC" firstStartedPulling="2024-07-17 00:10:11.831595894 +0000 UTC m=+277.231133873" lastFinishedPulling="2024-07-17 00:10:12.5324727 +0000 UTC m=+277.932010680" observedRunningTime="2024-07-17 00:10:12.862864438 +0000 UTC m=+278.262402436" watchObservedRunningTime="2024-07-17 00:10:12.864115653 +0000 UTC m=+278.263653651"
	Jul 17 00:10:14 addons-860537 kubelet[1285]: I0717 00:10:14.719279    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50aa8db6-2541-4fd4-85b7-e6894fe54ae0" path="/var/lib/kubelet/pods/50aa8db6-2541-4fd4-85b7-e6894fe54ae0/volumes"
	Jul 17 00:10:14 addons-860537 kubelet[1285]: I0717 00:10:14.720072    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a772ebab-91ad-4da1-be93-836f7a6b65a9" path="/var/lib/kubelet/pods/a772ebab-91ad-4da1-be93-836f7a6b65a9/volumes"
	Jul 17 00:10:14 addons-860537 kubelet[1285]: I0717 00:10:14.720522    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a87259b4-9d7e-472c-ad5d-cdac88b8d5b8" path="/var/lib/kubelet/pods/a87259b4-9d7e-472c-ad5d-cdac88b8d5b8/volumes"
	Jul 17 00:10:16 addons-860537 kubelet[1285]: I0717 00:10:16.720252    1285 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/69063ca2-2bf1-4ab4-a22d-d60ecab85951-webhook-cert\") pod \"69063ca2-2bf1-4ab4-a22d-d60ecab85951\" (UID: \"69063ca2-2bf1-4ab4-a22d-d60ecab85951\") "
	Jul 17 00:10:16 addons-860537 kubelet[1285]: I0717 00:10:16.720295    1285 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5266n\" (UniqueName: \"kubernetes.io/projected/69063ca2-2bf1-4ab4-a22d-d60ecab85951-kube-api-access-5266n\") pod \"69063ca2-2bf1-4ab4-a22d-d60ecab85951\" (UID: \"69063ca2-2bf1-4ab4-a22d-d60ecab85951\") "
	Jul 17 00:10:16 addons-860537 kubelet[1285]: I0717 00:10:16.723908    1285 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69063ca2-2bf1-4ab4-a22d-d60ecab85951-kube-api-access-5266n" (OuterVolumeSpecName: "kube-api-access-5266n") pod "69063ca2-2bf1-4ab4-a22d-d60ecab85951" (UID: "69063ca2-2bf1-4ab4-a22d-d60ecab85951"). InnerVolumeSpecName "kube-api-access-5266n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:10:16 addons-860537 kubelet[1285]: I0717 00:10:16.723996    1285 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69063ca2-2bf1-4ab4-a22d-d60ecab85951-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "69063ca2-2bf1-4ab4-a22d-d60ecab85951" (UID: "69063ca2-2bf1-4ab4-a22d-d60ecab85951"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 00:10:16 addons-860537 kubelet[1285]: I0717 00:10:16.821076    1285 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/69063ca2-2bf1-4ab4-a22d-d60ecab85951-webhook-cert\") on node \"addons-860537\" DevicePath \"\""
	Jul 17 00:10:16 addons-860537 kubelet[1285]: I0717 00:10:16.821127    1285 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5266n\" (UniqueName: \"kubernetes.io/projected/69063ca2-2bf1-4ab4-a22d-d60ecab85951-kube-api-access-5266n\") on node \"addons-860537\" DevicePath \"\""
	Jul 17 00:10:16 addons-860537 kubelet[1285]: I0717 00:10:16.866605    1285 scope.go:117] "RemoveContainer" containerID="a5134eb267dde3768cd570cc7337f30d2f747068198d5ea685ad9a26fd6e8113"
	Jul 17 00:10:16 addons-860537 kubelet[1285]: I0717 00:10:16.897865    1285 scope.go:117] "RemoveContainer" containerID="a5134eb267dde3768cd570cc7337f30d2f747068198d5ea685ad9a26fd6e8113"
	Jul 17 00:10:16 addons-860537 kubelet[1285]: E0717 00:10:16.898360    1285 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5134eb267dde3768cd570cc7337f30d2f747068198d5ea685ad9a26fd6e8113\": container with ID starting with a5134eb267dde3768cd570cc7337f30d2f747068198d5ea685ad9a26fd6e8113 not found: ID does not exist" containerID="a5134eb267dde3768cd570cc7337f30d2f747068198d5ea685ad9a26fd6e8113"
	Jul 17 00:10:16 addons-860537 kubelet[1285]: I0717 00:10:16.898411    1285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5134eb267dde3768cd570cc7337f30d2f747068198d5ea685ad9a26fd6e8113"} err="failed to get container status \"a5134eb267dde3768cd570cc7337f30d2f747068198d5ea685ad9a26fd6e8113\": rpc error: code = NotFound desc = could not find container \"a5134eb267dde3768cd570cc7337f30d2f747068198d5ea685ad9a26fd6e8113\": container with ID starting with a5134eb267dde3768cd570cc7337f30d2f747068198d5ea685ad9a26fd6e8113 not found: ID does not exist"
	Jul 17 00:10:18 addons-860537 kubelet[1285]: I0717 00:10:18.716317    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69063ca2-2bf1-4ab4-a22d-d60ecab85951" path="/var/lib/kubelet/pods/69063ca2-2bf1-4ab4-a22d-d60ecab85951/volumes"
	
	
	==> storage-provisioner [614282a521d58d24e3137e97082a860d78febe30c3660bd7c9ee1780d71ca762] <==
	I0717 00:05:58.949775       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:05:59.057000       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:05:59.057290       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:05:59.136110       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:05:59.136282       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-860537_501617fd-6546-4595-b758-09f40858752a!
	I0717 00:05:59.136808       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dbfc3767-c447-4add-919d-ab78363ddc31", APIVersion:"v1", ResourceVersion:"765", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-860537_501617fd-6546-4595-b758-09f40858752a became leader
	I0717 00:05:59.243553       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-860537_501617fd-6546-4595-b758-09f40858752a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-860537 -n addons-860537
helpers_test.go:261: (dbg) Run:  kubectl --context addons-860537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.01s)
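Editor's note: for readers triaging this Ingress failure locally, the commands below are a minimal manual sketch and are not part of the recorded run. They assume the addons-860537 profile from this report is still up and that the ingress addon created its usual objects in the ingress-nginx namespace; the ingress-nginx-controller deployment name is an assumption.

	# Confirm the controller pod and the test Ingress object exist
	kubectl --context addons-860537 -n ingress-nginx get pods -o wide
	kubectl --context addons-860537 get ingress -A

	# Re-run the probe the test relies on, with verbose curl output, from inside the VM
	out/minikube-linux-amd64 -p addons-860537 ssh "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"

	# Controller logs usually show why the backend never became reachable (deployment name assumed)
	kubectl --context addons-860537 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50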

                                                
                                    
TestAddons/parallel/MetricsServer (355.23s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.63419ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-zq4m7" [332284a0-4c05-4737-8669-c71012684bb2] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006277946s
addons_test.go:417: (dbg) Run:  kubectl --context addons-860537 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-860537 top pods -n kube-system: exit status 1 (78.733709ms)

                                                
                                                
** stderr ** 
	error: metrics not available yet

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-860537 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-860537 top pods -n kube-system: exit status 1 (72.649843ms)

                                                
                                                
** stderr ** 
	error: metrics not available yet

                                                
                                                
** /stderr **
2024/07/17 00:07:28 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:07:28 [DEBUG] GET http://192.168.39.251:5000: retrying in 2s (3 left)
addons_test.go:417: (dbg) Run:  kubectl --context addons-860537 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-860537 top pods -n kube-system: exit status 1 (75.053545ms)

                                                
                                                
** stderr ** 
	error: metrics not available yet

                                                
                                                
** /stderr **
2024/07/17 00:07:30 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:07:30 [DEBUG] GET http://192.168.39.251:5000: retrying in 4s (2 left)
addons_test.go:417: (dbg) Run:  kubectl --context addons-860537 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-860537 top pods -n kube-system: exit status 1 (64.616932ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-860537, age: 2m5.305949334s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-860537 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-860537 top pods -n kube-system: exit status 1 (64.849703ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-860537, age: 2m13.78156058s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-860537 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-860537 top pods -n kube-system: exit status 1 (70.08182ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x569p, age: 2m14.992413951s

                                                
                                                
** /stderr **
2024/07/17 00:08:05 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:08:05 [DEBUG] GET http://192.168.39.251:5000: retrying in 8s (1 left)
addons_test.go:417: (dbg) Run:  kubectl --context addons-860537 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-860537 top pods -n kube-system: exit status 1 (62.359479ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x569p, age: 2m48.490899811s

                                                
                                                
** /stderr **
2024/07/17 00:08:38 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:08:38 [DEBUG] GET http://192.168.39.251:5000: retrying in 8s (1 left)
addons_test.go:417: (dbg) Run:  kubectl --context addons-860537 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-860537 top pods -n kube-system: exit status 1 (70.335383ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x569p, age: 3m24.612068918s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-860537 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-860537 top pods -n kube-system: exit status 1 (61.183717ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x569p, age: 3m51.784763322s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-860537 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-860537 top pods -n kube-system: exit status 1 (68.163108ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x569p, age: 4m31.579484187s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-860537 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-860537 top pods -n kube-system: exit status 1 (64.080094ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x569p, age: 5m53.809772678s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-860537 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-860537 top pods -n kube-system: exit status 1 (65.943153ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-x569p, age: 7m22.871729353s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 addons disable metrics-server --alsologtostderr -v=1
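Editor's note: every kubectl top attempt above failed because pod metrics never became available within the test window. As a hedged triage sketch (not part of the recorded run), the commands below assume the standard objects installed by the metrics-server addon: an APIService named v1beta1.metrics.k8s.io, and a deployment in kube-system matching the k8s-app=metrics-server label the test itself waits on.

	# Is the aggregated Metrics API registered and reporting Available?
	kubectl --context addons-860537 get apiservice v1beta1.metrics.k8s.io -o wide

	# Is the metrics-server pod Ready, and what does it log?
	kubectl --context addons-860537 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-860537 -n kube-system logs -l k8s-app=metrics-server --tail=50

	# Node metrics usually appear before per-pod metrics; this separates a slow scrape from a broken API
	kubectl --context addons-860537 top nodes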
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-860537 -n addons-860537
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-860537 logs -n 25: (1.391618197s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-375038                                                                     | download-only-375038 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-407804                                                                     | download-only-407804 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-020346                                                                     | download-only-020346 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-375038                                                                     | download-only-375038 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-998982 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | binary-mirror-998982                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46519                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-998982                                                                     | binary-mirror-998982 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| addons  | disable dashboard -p                                                                        | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | addons-860537                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | addons-860537                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-860537 --wait=true                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | -p addons-860537                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-860537 ssh cat                                                                       | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | /opt/local-path-provisioner/pvc-52a7cdd9-a848-453e-a1d0-34493d73230f_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-860537 addons disable                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | -p addons-860537                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-860537 ip                                                                            | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	| addons  | addons-860537 addons disable                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | addons-860537                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC | 17 Jul 24 00:07 UTC |
	|         | addons-860537                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-860537 ssh curl -s                                                                   | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:07 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-860537 addons                                                                        | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-860537 addons                                                                        | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-860537 addons disable                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-860537 ip                                                                            | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC | 17 Jul 24 00:10 UTC |
	| addons  | addons-860537 addons disable                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC | 17 Jul 24 00:10 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-860537 addons disable                                                                | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:10 UTC | 17 Jul 24 00:10 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-860537 addons                                                                        | addons-860537        | jenkins | v1.33.1 | 17 Jul 24 00:13 UTC | 17 Jul 24 00:13 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:04:53
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:04:53.893456   20973 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:04:53.893708   20973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:53.893717   20973 out.go:304] Setting ErrFile to fd 2...
	I0717 00:04:53.893721   20973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:53.893902   20973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:04:53.894494   20973 out.go:298] Setting JSON to false
	I0717 00:04:53.895276   20973 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2843,"bootTime":1721171851,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:04:53.895336   20973 start.go:139] virtualization: kvm guest
	I0717 00:04:53.897223   20973 out.go:177] * [addons-860537] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:04:53.898526   20973 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:04:53.898529   20973 notify.go:220] Checking for updates...
	I0717 00:04:53.901049   20973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:04:53.902282   20973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:04:53.903540   20973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:04:53.904749   20973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:04:53.905896   20973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:04:53.907223   20973 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:04:53.940046   20973 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 00:04:53.941354   20973 start.go:297] selected driver: kvm2
	I0717 00:04:53.941369   20973 start.go:901] validating driver "kvm2" against <nil>
	I0717 00:04:53.941383   20973 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:04:53.942339   20973 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:04:53.942424   20973 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:04:53.957687   20973 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:04:53.957770   20973 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:04:53.958146   20973 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:04:53.958183   20973 cni.go:84] Creating CNI manager for ""
	I0717 00:04:53.958195   20973 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:04:53.958211   20973 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 00:04:53.958290   20973 start.go:340] cluster config:
	{Name:addons-860537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-860537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:04:53.958434   20973 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:04:53.960206   20973 out.go:177] * Starting "addons-860537" primary control-plane node in "addons-860537" cluster
	I0717 00:04:53.961486   20973 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:04:53.961525   20973 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:04:53.961531   20973 cache.go:56] Caching tarball of preloaded images
	I0717 00:04:53.961607   20973 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:04:53.961617   20973 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:04:53.961938   20973 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/config.json ...
	I0717 00:04:53.961958   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/config.json: {Name:mke28f9d9ed27413202277398c0d4001e090b138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:04:53.962084   20973 start.go:360] acquireMachinesLock for addons-860537: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:04:53.962129   20973 start.go:364] duration metric: took 31.046µs to acquireMachinesLock for "addons-860537"
	I0717 00:04:53.962146   20973 start.go:93] Provisioning new machine with config: &{Name:addons-860537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-860537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:04:53.962203   20973 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 00:04:53.963941   20973 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 00:04:53.964080   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:04:53.964126   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:04:53.978544   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0717 00:04:53.979062   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:04:53.979594   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:04:53.979623   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:04:53.979964   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:04:53.980141   20973 main.go:141] libmachine: (addons-860537) Calling .GetMachineName
	I0717 00:04:53.980302   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:04:53.980449   20973 start.go:159] libmachine.API.Create for "addons-860537" (driver="kvm2")
	I0717 00:04:53.980473   20973 client.go:168] LocalClient.Create starting
	I0717 00:04:53.980507   20973 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 00:04:54.396858   20973 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 00:04:54.447426   20973 main.go:141] libmachine: Running pre-create checks...
	I0717 00:04:54.447452   20973 main.go:141] libmachine: (addons-860537) Calling .PreCreateCheck
	I0717 00:04:54.447997   20973 main.go:141] libmachine: (addons-860537) Calling .GetConfigRaw
	I0717 00:04:54.448590   20973 main.go:141] libmachine: Creating machine...
	I0717 00:04:54.448608   20973 main.go:141] libmachine: (addons-860537) Calling .Create
	I0717 00:04:54.448761   20973 main.go:141] libmachine: (addons-860537) Creating KVM machine...
	I0717 00:04:54.450023   20973 main.go:141] libmachine: (addons-860537) DBG | found existing default KVM network
	I0717 00:04:54.450780   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:54.450642   20994 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0717 00:04:54.450804   20973 main.go:141] libmachine: (addons-860537) DBG | created network xml: 
	I0717 00:04:54.450816   20973 main.go:141] libmachine: (addons-860537) DBG | <network>
	I0717 00:04:54.450825   20973 main.go:141] libmachine: (addons-860537) DBG |   <name>mk-addons-860537</name>
	I0717 00:04:54.450831   20973 main.go:141] libmachine: (addons-860537) DBG |   <dns enable='no'/>
	I0717 00:04:54.450835   20973 main.go:141] libmachine: (addons-860537) DBG |   
	I0717 00:04:54.450843   20973 main.go:141] libmachine: (addons-860537) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 00:04:54.450848   20973 main.go:141] libmachine: (addons-860537) DBG |     <dhcp>
	I0717 00:04:54.450854   20973 main.go:141] libmachine: (addons-860537) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 00:04:54.450859   20973 main.go:141] libmachine: (addons-860537) DBG |     </dhcp>
	I0717 00:04:54.450864   20973 main.go:141] libmachine: (addons-860537) DBG |   </ip>
	I0717 00:04:54.450871   20973 main.go:141] libmachine: (addons-860537) DBG |   
	I0717 00:04:54.450944   20973 main.go:141] libmachine: (addons-860537) DBG | </network>
	I0717 00:04:54.450976   20973 main.go:141] libmachine: (addons-860537) DBG | 
	I0717 00:04:54.456748   20973 main.go:141] libmachine: (addons-860537) DBG | trying to create private KVM network mk-addons-860537 192.168.39.0/24...
	I0717 00:04:54.525501   20973 main.go:141] libmachine: (addons-860537) DBG | private KVM network mk-addons-860537 192.168.39.0/24 created
	I0717 00:04:54.525535   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:54.525458   20994 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:04:54.525556   20973 main.go:141] libmachine: (addons-860537) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537 ...
	I0717 00:04:54.525575   20973 main.go:141] libmachine: (addons-860537) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 00:04:54.525652   20973 main.go:141] libmachine: (addons-860537) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 00:04:54.767036   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:54.766915   20994 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa...
	I0717 00:04:55.228897   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:55.228774   20994 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/addons-860537.rawdisk...
	I0717 00:04:55.228923   20973 main.go:141] libmachine: (addons-860537) DBG | Writing magic tar header
	I0717 00:04:55.228937   20973 main.go:141] libmachine: (addons-860537) DBG | Writing SSH key tar header
	I0717 00:04:55.228945   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:55.228887   20994 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537 ...
	I0717 00:04:55.229034   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537
	I0717 00:04:55.229070   20973 main.go:141] libmachine: (addons-860537) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537 (perms=drwx------)
	I0717 00:04:55.229083   20973 main.go:141] libmachine: (addons-860537) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:04:55.229094   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 00:04:55.229109   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:04:55.229122   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 00:04:55.229136   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:04:55.229154   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:04:55.229167   20973 main.go:141] libmachine: (addons-860537) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 00:04:55.229182   20973 main.go:141] libmachine: (addons-860537) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 00:04:55.229191   20973 main.go:141] libmachine: (addons-860537) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:04:55.229200   20973 main.go:141] libmachine: (addons-860537) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:04:55.229205   20973 main.go:141] libmachine: (addons-860537) Creating domain...
	I0717 00:04:55.229214   20973 main.go:141] libmachine: (addons-860537) DBG | Checking permissions on dir: /home
	I0717 00:04:55.229228   20973 main.go:141] libmachine: (addons-860537) DBG | Skipping /home - not owner
	I0717 00:04:55.230263   20973 main.go:141] libmachine: (addons-860537) define libvirt domain using xml: 
	I0717 00:04:55.230295   20973 main.go:141] libmachine: (addons-860537) <domain type='kvm'>
	I0717 00:04:55.230306   20973 main.go:141] libmachine: (addons-860537)   <name>addons-860537</name>
	I0717 00:04:55.230312   20973 main.go:141] libmachine: (addons-860537)   <memory unit='MiB'>4000</memory>
	I0717 00:04:55.230320   20973 main.go:141] libmachine: (addons-860537)   <vcpu>2</vcpu>
	I0717 00:04:55.230327   20973 main.go:141] libmachine: (addons-860537)   <features>
	I0717 00:04:55.230335   20973 main.go:141] libmachine: (addons-860537)     <acpi/>
	I0717 00:04:55.230344   20973 main.go:141] libmachine: (addons-860537)     <apic/>
	I0717 00:04:55.230352   20973 main.go:141] libmachine: (addons-860537)     <pae/>
	I0717 00:04:55.230361   20973 main.go:141] libmachine: (addons-860537)     
	I0717 00:04:55.230369   20973 main.go:141] libmachine: (addons-860537)   </features>
	I0717 00:04:55.230383   20973 main.go:141] libmachine: (addons-860537)   <cpu mode='host-passthrough'>
	I0717 00:04:55.230391   20973 main.go:141] libmachine: (addons-860537)   
	I0717 00:04:55.230399   20973 main.go:141] libmachine: (addons-860537)   </cpu>
	I0717 00:04:55.230407   20973 main.go:141] libmachine: (addons-860537)   <os>
	I0717 00:04:55.230414   20973 main.go:141] libmachine: (addons-860537)     <type>hvm</type>
	I0717 00:04:55.230426   20973 main.go:141] libmachine: (addons-860537)     <boot dev='cdrom'/>
	I0717 00:04:55.230435   20973 main.go:141] libmachine: (addons-860537)     <boot dev='hd'/>
	I0717 00:04:55.230447   20973 main.go:141] libmachine: (addons-860537)     <bootmenu enable='no'/>
	I0717 00:04:55.230456   20973 main.go:141] libmachine: (addons-860537)   </os>
	I0717 00:04:55.230464   20973 main.go:141] libmachine: (addons-860537)   <devices>
	I0717 00:04:55.230478   20973 main.go:141] libmachine: (addons-860537)     <disk type='file' device='cdrom'>
	I0717 00:04:55.230495   20973 main.go:141] libmachine: (addons-860537)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/boot2docker.iso'/>
	I0717 00:04:55.230505   20973 main.go:141] libmachine: (addons-860537)       <target dev='hdc' bus='scsi'/>
	I0717 00:04:55.230513   20973 main.go:141] libmachine: (addons-860537)       <readonly/>
	I0717 00:04:55.230524   20973 main.go:141] libmachine: (addons-860537)     </disk>
	I0717 00:04:55.230538   20973 main.go:141] libmachine: (addons-860537)     <disk type='file' device='disk'>
	I0717 00:04:55.230554   20973 main.go:141] libmachine: (addons-860537)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:04:55.230569   20973 main.go:141] libmachine: (addons-860537)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/addons-860537.rawdisk'/>
	I0717 00:04:55.230580   20973 main.go:141] libmachine: (addons-860537)       <target dev='hda' bus='virtio'/>
	I0717 00:04:55.230588   20973 main.go:141] libmachine: (addons-860537)     </disk>
	I0717 00:04:55.230594   20973 main.go:141] libmachine: (addons-860537)     <interface type='network'>
	I0717 00:04:55.230605   20973 main.go:141] libmachine: (addons-860537)       <source network='mk-addons-860537'/>
	I0717 00:04:55.230620   20973 main.go:141] libmachine: (addons-860537)       <model type='virtio'/>
	I0717 00:04:55.230633   20973 main.go:141] libmachine: (addons-860537)     </interface>
	I0717 00:04:55.230643   20973 main.go:141] libmachine: (addons-860537)     <interface type='network'>
	I0717 00:04:55.230665   20973 main.go:141] libmachine: (addons-860537)       <source network='default'/>
	I0717 00:04:55.230675   20973 main.go:141] libmachine: (addons-860537)       <model type='virtio'/>
	I0717 00:04:55.230705   20973 main.go:141] libmachine: (addons-860537)     </interface>
	I0717 00:04:55.230726   20973 main.go:141] libmachine: (addons-860537)     <serial type='pty'>
	I0717 00:04:55.230735   20973 main.go:141] libmachine: (addons-860537)       <target port='0'/>
	I0717 00:04:55.230747   20973 main.go:141] libmachine: (addons-860537)     </serial>
	I0717 00:04:55.230759   20973 main.go:141] libmachine: (addons-860537)     <console type='pty'>
	I0717 00:04:55.230768   20973 main.go:141] libmachine: (addons-860537)       <target type='serial' port='0'/>
	I0717 00:04:55.230776   20973 main.go:141] libmachine: (addons-860537)     </console>
	I0717 00:04:55.230781   20973 main.go:141] libmachine: (addons-860537)     <rng model='virtio'>
	I0717 00:04:55.230788   20973 main.go:141] libmachine: (addons-860537)       <backend model='random'>/dev/random</backend>
	I0717 00:04:55.230793   20973 main.go:141] libmachine: (addons-860537)     </rng>
	I0717 00:04:55.230798   20973 main.go:141] libmachine: (addons-860537)     
	I0717 00:04:55.230810   20973 main.go:141] libmachine: (addons-860537)     
	I0717 00:04:55.230834   20973 main.go:141] libmachine: (addons-860537)   </devices>
	I0717 00:04:55.230851   20973 main.go:141] libmachine: (addons-860537) </domain>
	I0717 00:04:55.230865   20973 main.go:141] libmachine: (addons-860537) 
	I0717 00:04:55.236742   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:60:f7:22 in network default
	I0717 00:04:55.237381   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:55.237402   20973 main.go:141] libmachine: (addons-860537) Ensuring networks are active...
	I0717 00:04:55.238094   20973 main.go:141] libmachine: (addons-860537) Ensuring network default is active
	I0717 00:04:55.238406   20973 main.go:141] libmachine: (addons-860537) Ensuring network mk-addons-860537 is active
	I0717 00:04:55.238906   20973 main.go:141] libmachine: (addons-860537) Getting domain xml...
	I0717 00:04:55.239654   20973 main.go:141] libmachine: (addons-860537) Creating domain...
	I0717 00:04:56.643575   20973 main.go:141] libmachine: (addons-860537) Waiting to get IP...
	I0717 00:04:56.644319   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:56.644724   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:56.644764   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:56.644703   20994 retry.go:31] will retry after 258.934541ms: waiting for machine to come up
	I0717 00:04:56.905312   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:56.905759   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:56.905787   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:56.905721   20994 retry.go:31] will retry after 290.950508ms: waiting for machine to come up
	I0717 00:04:57.198168   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:57.198554   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:57.198582   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:57.198510   20994 retry.go:31] will retry after 392.511162ms: waiting for machine to come up
	I0717 00:04:57.593008   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:57.593478   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:57.593507   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:57.593427   20994 retry.go:31] will retry after 536.216901ms: waiting for machine to come up
	I0717 00:04:58.131098   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:58.131550   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:58.131573   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:58.131500   20994 retry.go:31] will retry after 486.129485ms: waiting for machine to come up
	I0717 00:04:58.619211   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:58.619623   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:58.619650   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:58.619574   20994 retry.go:31] will retry after 643.494017ms: waiting for machine to come up
	I0717 00:04:59.265036   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:04:59.265495   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:04:59.265523   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:04:59.265457   20994 retry.go:31] will retry after 750.648926ms: waiting for machine to come up
	I0717 00:05:00.017338   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:00.017711   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:00.017750   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:00.017670   20994 retry.go:31] will retry after 1.031561955s: waiting for machine to come up
	I0717 00:05:01.050504   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:01.050994   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:01.051023   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:01.050924   20994 retry.go:31] will retry after 1.467936025s: waiting for machine to come up
	I0717 00:05:02.519944   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:02.520329   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:02.520350   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:02.520300   20994 retry.go:31] will retry after 1.680538008s: waiting for machine to come up
	I0717 00:05:04.202850   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:04.203293   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:04.203330   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:04.203259   20994 retry.go:31] will retry after 2.183867343s: waiting for machine to come up
	I0717 00:05:06.388764   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:06.389189   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:06.389212   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:06.389150   20994 retry.go:31] will retry after 2.378398435s: waiting for machine to come up
	I0717 00:05:08.770797   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:08.771325   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:08.771343   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:08.771294   20994 retry.go:31] will retry after 3.027010323s: waiting for machine to come up
	I0717 00:05:11.802107   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:11.802574   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find current IP address of domain addons-860537 in network mk-addons-860537
	I0717 00:05:11.802602   20973 main.go:141] libmachine: (addons-860537) DBG | I0717 00:05:11.802523   20994 retry.go:31] will retry after 3.456497207s: waiting for machine to come up
	I0717 00:05:15.260945   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.261431   20973 main.go:141] libmachine: (addons-860537) Found IP for machine: 192.168.39.251
	I0717 00:05:15.261456   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has current primary IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.261465   20973 main.go:141] libmachine: (addons-860537) Reserving static IP address...
	I0717 00:05:15.261901   20973 main.go:141] libmachine: (addons-860537) DBG | unable to find host DHCP lease matching {name: "addons-860537", mac: "52:54:00:fb:b6:26", ip: "192.168.39.251"} in network mk-addons-860537
	I0717 00:05:15.335142   20973 main.go:141] libmachine: (addons-860537) DBG | Getting to WaitForSSH function...
	I0717 00:05:15.335171   20973 main.go:141] libmachine: (addons-860537) Reserved static IP address: 192.168.39.251
	I0717 00:05:15.335193   20973 main.go:141] libmachine: (addons-860537) Waiting for SSH to be available...
	I0717 00:05:15.337627   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.338007   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:15.338036   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.338241   20973 main.go:141] libmachine: (addons-860537) DBG | Using SSH client type: external
	I0717 00:05:15.338263   20973 main.go:141] libmachine: (addons-860537) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa (-rw-------)
	I0717 00:05:15.338303   20973 main.go:141] libmachine: (addons-860537) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:05:15.338314   20973 main.go:141] libmachine: (addons-860537) DBG | About to run SSH command:
	I0717 00:05:15.338325   20973 main.go:141] libmachine: (addons-860537) DBG | exit 0
	I0717 00:05:15.477154   20973 main.go:141] libmachine: (addons-860537) DBG | SSH cmd err, output: <nil>: 
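The probe behind "Waiting for SSH to be available" simply runs `exit 0` through the external ssh client with the options shown above until it succeeds. A rough sketch of that loop, assuming the system ssh binary is on PATH; the host and key path are placeholders:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH keeps running `exit 0` through the external ssh client until
	// the command exits cleanly, like the WaitForSSH step above.
	func waitForSSH(host, keyPath string) error {
		args := []string{
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"-p", "22",
			"docker@" + host,
			"exit 0",
		}
		for attempt := 0; attempt < 30; attempt++ {
			if err := exec.Command("ssh", args...).Run(); err == nil {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh to %s did not become available", host)
	}

	func main() {
		// Placeholder values for the sketch; the run above uses the machine's
		// generated id_rsa under the .minikube profile directory.
		_ = waitForSSH("192.168.39.251", "/path/to/id_rsa")
	}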
	I0717 00:05:15.477444   20973 main.go:141] libmachine: (addons-860537) KVM machine creation complete!
	I0717 00:05:15.477716   20973 main.go:141] libmachine: (addons-860537) Calling .GetConfigRaw
	I0717 00:05:15.478279   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:15.478482   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:15.478628   20973 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:05:15.478643   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:15.480235   20973 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:05:15.480249   20973 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:05:15.480259   20973 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:05:15.480265   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:15.482863   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.483330   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:15.483361   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.483501   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:15.483690   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.483851   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.484025   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:15.484184   20973 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:15.484364   20973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0717 00:05:15.484376   20973 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:05:15.599935   20973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:05:15.599965   20973 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:05:15.599975   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:15.602725   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.603097   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:15.603127   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.603313   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:15.603523   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.603719   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.603827   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:15.604030   20973 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:15.604230   20973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0717 00:05:15.604242   20973 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:05:15.721476   20973 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:05:15.721559   20973 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:05:15.721570   20973 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:05:15.721576   20973 main.go:141] libmachine: (addons-860537) Calling .GetMachineName
	I0717 00:05:15.721800   20973 buildroot.go:166] provisioning hostname "addons-860537"
	I0717 00:05:15.721829   20973 main.go:141] libmachine: (addons-860537) Calling .GetMachineName
	I0717 00:05:15.722011   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:15.724258   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.724614   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:15.724634   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.724808   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:15.724998   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.725147   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.725267   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:15.725403   20973 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:15.725598   20973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0717 00:05:15.725616   20973 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-860537 && echo "addons-860537" | sudo tee /etc/hostname
	I0717 00:05:15.857577   20973 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-860537
	
	I0717 00:05:15.857604   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:15.860046   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.860371   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:15.860397   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.860582   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:15.860780   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.860919   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:15.861033   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:15.861182   20973 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:15.861338   20973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0717 00:05:15.861353   20973 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-860537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-860537/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-860537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:05:15.994136   20973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:05:15.994166   20973 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:05:15.994193   20973 buildroot.go:174] setting up certificates
	I0717 00:05:15.994206   20973 provision.go:84] configureAuth start
	I0717 00:05:15.994215   20973 main.go:141] libmachine: (addons-860537) Calling .GetMachineName
	I0717 00:05:15.994482   20973 main.go:141] libmachine: (addons-860537) Calling .GetIP
	I0717 00:05:15.997035   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.997382   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:15.997406   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:15.997620   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:15.999815   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.000166   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.000199   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.000335   20973 provision.go:143] copyHostCerts
	I0717 00:05:16.000429   20973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:05:16.000599   20973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:05:16.000688   20973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:05:16.000754   20973 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.addons-860537 san=[127.0.0.1 192.168.39.251 addons-860537 localhost minikube]
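The server certificate is issued against the minikube CA with exactly the SANs listed above (127.0.0.1, 192.168.39.251, addons-860537, localhost, minikube). A minimal crypto/x509 sketch of that shape, with a throwaway self-signed CA standing in for ca.pem/ca-key.pem and error handling elided; this is not minikube's provisioning code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA for the sketch; the real run loads ca.pem/ca-key.pem
		// from the .minikube/certs directory instead. Errors elided.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SAN set reported in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-860537"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-860537", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.251")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}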
	I0717 00:05:16.206355   20973 provision.go:177] copyRemoteCerts
	I0717 00:05:16.206422   20973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:05:16.206450   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:16.209390   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.209827   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.209851   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.210106   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:16.210335   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:16.210500   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:16.210669   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:16.299799   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:05:16.325505   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:05:16.352381   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:05:16.383320   20973 provision.go:87] duration metric: took 389.100847ms to configureAuth
	I0717 00:05:16.383352   20973 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:05:16.383544   20973 config.go:182] Loaded profile config "addons-860537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:05:16.383627   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:16.386526   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.386851   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.386880   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.387088   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:16.387331   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:16.387500   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:16.387669   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:16.387846   20973 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:16.388042   20973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0717 00:05:16.388057   20973 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:05:16.739279   20973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:05:16.739306   20973 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:05:16.739316   20973 main.go:141] libmachine: (addons-860537) Calling .GetURL
	I0717 00:05:16.740610   20973 main.go:141] libmachine: (addons-860537) DBG | Using libvirt version 6000000
	I0717 00:05:16.742872   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.743185   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.743212   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.743352   20973 main.go:141] libmachine: Docker is up and running!
	I0717 00:05:16.743369   20973 main.go:141] libmachine: Reticulating splines...
	I0717 00:05:16.743382   20973 client.go:171] duration metric: took 22.762896786s to LocalClient.Create
	I0717 00:05:16.743407   20973 start.go:167] duration metric: took 22.762959111s to libmachine.API.Create "addons-860537"
	I0717 00:05:16.743417   20973 start.go:293] postStartSetup for "addons-860537" (driver="kvm2")
	I0717 00:05:16.743427   20973 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:05:16.743444   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:16.743693   20973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:05:16.743716   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:16.745943   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.746222   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.746243   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.746385   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:16.746569   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:16.746727   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:16.746877   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:16.843604   20973 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:05:16.849203   20973 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:05:16.849274   20973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 00:05:16.849445   20973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 00:05:16.849511   20973 start.go:296] duration metric: took 106.08635ms for postStartSetup
	I0717 00:05:16.849552   20973 main.go:141] libmachine: (addons-860537) Calling .GetConfigRaw
	I0717 00:05:16.881903   20973 main.go:141] libmachine: (addons-860537) Calling .GetIP
	I0717 00:05:16.884588   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.884896   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.884939   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.885171   20973 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/config.json ...
	I0717 00:05:16.885509   20973 start.go:128] duration metric: took 22.923294718s to createHost
	I0717 00:05:16.885543   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:16.887940   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.888331   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:16.888354   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:16.888589   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:16.888802   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:16.888976   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:16.889124   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:16.889286   20973 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:16.889487   20973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0717 00:05:16.889501   20973 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:05:17.014902   20973 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721174716.984670982
	
	I0717 00:05:17.014929   20973 fix.go:216] guest clock: 1721174716.984670982
	I0717 00:05:17.014938   20973 fix.go:229] Guest: 2024-07-17 00:05:16.984670982 +0000 UTC Remote: 2024-07-17 00:05:16.885527734 +0000 UTC m=+23.025858730 (delta=99.143248ms)
	I0717 00:05:17.014987   20973 fix.go:200] guest clock delta is within tolerance: 99.143248ms
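The fix.go lines compare the guest clock read over SSH against the host clock and accept the drift when it stays under a tolerance. A small sketch of that check; the 2s tolerance is an assumption for illustration, the log only shows that a 99ms delta passed:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports whether the guest clock is close enough to the host
	// clock, taking the absolute value of the drift.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(99 * time.Millisecond)
		fmt.Println(clockDeltaOK(guest, host, 2*time.Second)) // true
	}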
	I0717 00:05:17.014994   20973 start.go:83] releasing machines lock for "addons-860537", held for 23.052855063s
	I0717 00:05:17.015017   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:17.015304   20973 main.go:141] libmachine: (addons-860537) Calling .GetIP
	I0717 00:05:17.018066   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:17.018522   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:17.018551   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:17.018674   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:17.019285   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:17.019463   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:17.019545   20973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:05:17.019593   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:17.019728   20973 ssh_runner.go:195] Run: cat /version.json
	I0717 00:05:17.019751   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:17.022637   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:17.022938   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:17.022981   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:17.022998   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:17.023164   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:17.023274   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:17.023327   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:17.023339   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:17.023511   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:17.023567   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:17.023714   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:17.023728   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:17.023902   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:17.024036   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:17.106071   20973 ssh_runner.go:195] Run: systemctl --version
	I0717 00:05:17.136430   20973 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:05:17.476801   20973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:05:17.483890   20973 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:05:17.483968   20973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:05:17.501021   20973 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:05:17.501049   20973 start.go:495] detecting cgroup driver to use...
	I0717 00:05:17.501182   20973 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:05:17.517644   20973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:05:17.533803   20973 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:05:17.533866   20973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:05:17.548763   20973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:05:17.563593   20973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:05:17.678436   20973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:05:17.839660   20973 docker.go:233] disabling docker service ...
	I0717 00:05:17.839733   20973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:05:17.854747   20973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:05:17.868435   20973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:05:17.997646   20973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:05:18.120026   20973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:05:18.134485   20973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:05:18.154165   20973 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:05:18.154233   20973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:18.165599   20973 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:05:18.165651   20973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:18.176888   20973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:18.188288   20973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:18.200053   20973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:05:18.211625   20973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:18.222943   20973 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:18.241750   20973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
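The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. The same edits collected into one loop, as a local illustration only; in the test each command runs through ssh_runner on the guest:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// The first four sed edits from the log, verbatim. Requires root and an
		// existing 02-crio.conf, so treat this as a sketch rather than a tool.
		cmds := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		}
		for _, c := range cmds {
			if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
				log.Fatalf("%s: %v\n%s", c, err, out)
			}
		}
	}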
	I0717 00:05:18.252983   20973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:05:18.263723   20973 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:05:18.263770   20973 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:05:18.277816   20973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:05:18.288297   20973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:05:18.410820   20973 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:05:18.552182   20973 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:05:18.552272   20973 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:05:18.557056   20973 start.go:563] Will wait 60s for crictl version
	I0717 00:05:18.557125   20973 ssh_runner.go:195] Run: which crictl
	I0717 00:05:18.560934   20973 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:05:18.602512   20973 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:05:18.602644   20973 ssh_runner.go:195] Run: crio --version
	I0717 00:05:18.633032   20973 ssh_runner.go:195] Run: crio --version
	I0717 00:05:18.670310   20973 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:05:18.671415   20973 main.go:141] libmachine: (addons-860537) Calling .GetIP
	I0717 00:05:18.673770   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:18.674125   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:18.674151   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:18.674328   20973 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:05:18.678889   20973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:05:18.692339   20973 kubeadm.go:883] updating cluster {Name:addons-860537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:addons-860537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:05:18.692446   20973 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:05:18.692486   20973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:05:18.726967   20973 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 00:05:18.727037   20973 ssh_runner.go:195] Run: which lz4
	I0717 00:05:18.731110   20973 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 00:05:18.735566   20973 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 00:05:18.735601   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 00:05:20.108232   20973 crio.go:462] duration metric: took 1.377149319s to copy over tarball
	I0717 00:05:20.108291   20973 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 00:05:22.518305   20973 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.40998787s)
	I0717 00:05:22.518331   20973 crio.go:469] duration metric: took 2.410075699s to extract the tarball
	I0717 00:05:22.518338   20973 ssh_runner.go:146] rm: /preloaded.tar.lz4
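The preload step copies the cri-o image tarball to the guest and unpacks it under /var with lz4, recording a duration metric for the extraction. A sketch of the same invocation and timing, using the paths shown in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Same tar invocation the log records; needs sudo, lz4 and the tarball
		// already copied to /preloaded.tar.lz4, so this is illustrative only.
		start := time.Now()
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if err := cmd.Run(); err != nil {
			fmt.Println("extract failed:", err)
			return
		}
		fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
	}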
	I0717 00:05:22.556149   20973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:05:22.599182   20973 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:05:22.599202   20973 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:05:22.599218   20973 kubeadm.go:934] updating node { 192.168.39.251 8443 v1.30.2 crio true true} ...
	I0717 00:05:22.599344   20973 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-860537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-860537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
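The kubelet drop-in above is plain text filled in with the Kubernetes version, node name and node IP. A text/template sketch that renders the same unit; the field names here are assumptions for the sketch, not minikube's own types:

	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

	[Install]
	`

	func main() {
		// Render the drop-in with the values from this run.
		t := template.Must(template.New("kubelet").Parse(unit))
		_ = t.Execute(os.Stdout, struct{ Version, Node, IP string }{
			Version: "v1.30.2", Node: "addons-860537", IP: "192.168.39.251",
		})
	}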
	I0717 00:05:22.599436   20973 ssh_runner.go:195] Run: crio config
	I0717 00:05:22.657652   20973 cni.go:84] Creating CNI manager for ""
	I0717 00:05:22.657671   20973 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:05:22.657681   20973 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:05:22.657700   20973 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.251 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-860537 NodeName:addons-860537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:05:22.657877   20973 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-860537"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 00:05:22.657956   20973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:05:22.669211   20973 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:05:22.669281   20973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 00:05:22.680352   20973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 00:05:22.698398   20973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:05:22.716983   20973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0717 00:05:22.735225   20973 ssh_runner.go:195] Run: grep 192.168.39.251	control-plane.minikube.internal$ /etc/hosts
	I0717 00:05:22.739595   20973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:05:22.753033   20973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:05:22.879307   20973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:05:22.902008   20973 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537 for IP: 192.168.39.251
	I0717 00:05:22.902034   20973 certs.go:194] generating shared ca certs ...
	I0717 00:05:22.902054   20973 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:22.902214   20973 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 00:05:22.968659   20973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt ...
	I0717 00:05:22.968685   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt: {Name:mkb7c35c1fe3bf75bf3e04708011446ecd5a1fcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:22.968846   20973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key ...
	I0717 00:05:22.968861   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key: {Name:mka3fe2df73604c22d5a52d9cb761bfc181c1060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:22.968961   20973 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 00:05:23.265092   20973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt ...
	I0717 00:05:23.265121   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt: {Name:mk3ed0d6da8881d88824249cab7761b1991364f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.265283   20973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key ...
	I0717 00:05:23.265293   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key: {Name:mk1f22362e41d60983244fa20bd685145d625754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.265358   20973 certs.go:256] generating profile certs ...
	I0717 00:05:23.265410   20973 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.key
	I0717 00:05:23.265424   20973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt with IP's: []
	I0717 00:05:23.553801   20973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt ...
	I0717 00:05:23.553827   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: {Name:mk26376f17f48227b1a5d85414766d77a530de49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.553972   20973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.key ...
	I0717 00:05:23.553983   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.key: {Name:mk58d4e284faecb0ba73852a57d5f096053a25e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.554052   20973 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.key.6221c5d3
	I0717 00:05:23.554069   20973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.crt.6221c5d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.251]
	I0717 00:05:23.661857   20973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.crt.6221c5d3 ...
	I0717 00:05:23.661884   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.crt.6221c5d3: {Name:mkd09d1f2d8206034dd1c6a9032cfa8fd793256e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.662041   20973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.key.6221c5d3 ...
	I0717 00:05:23.662055   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.key.6221c5d3: {Name:mkb4d15842b2d89a1261df747d61a0afa14b0c1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.662121   20973 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.crt.6221c5d3 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.crt
	I0717 00:05:23.662193   20973 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.key.6221c5d3 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.key
	I0717 00:05:23.662240   20973 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.key
	I0717 00:05:23.662253   20973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.crt with IP's: []
	I0717 00:05:23.957663   20973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.crt ...
	I0717 00:05:23.957688   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.crt: {Name:mk774f090fa46b32ce8968ba55230a177d7df948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.965032   20973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.key ...
	I0717 00:05:23.965055   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.key: {Name:mk2b818ab8fa9e51d9da1ce250e1da4098e5a12e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:23.965301   20973 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:05:23.965340   20973 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:05:23.965367   20973 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:05:23.965398   20973 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 00:05:23.966102   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:05:23.996149   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:05:24.020622   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:05:24.043956   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:05:24.068159   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 00:05:24.093119   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 00:05:24.118776   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:05:24.146180   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:05:24.174146   20973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:05:24.198258   20973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:05:24.215186   20973 ssh_runner.go:195] Run: openssl version
	I0717 00:05:24.221129   20973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:05:24.232345   20973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:05:24.237158   20973 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:05:24.237224   20973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:05:24.243267   20973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
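Trusting the minikube CA on the guest comes down to computing its OpenSSL subject hash and linking <hash>.0 in /etc/ssl/certs to it, which is what the two commands above do. A sketch of the same steps driven from Go:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Subject hash of the CA, as printed by `openssl x509 -hash -noout`.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			fmt.Println("openssl:", err)
			return
		}
		hash := strings.TrimSpace(string(out))

		// Create /etc/ssl/certs/<hash>.0 if it is not already linked
		// (needs root; illustrative only).
		link := "/etc/ssl/certs/" + hash + ".0"
		if _, err := os.Lstat(link); os.IsNotExist(err) {
			_ = os.Symlink("/etc/ssl/certs/minikubeCA.pem", link)
		}
	}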
	I0717 00:05:24.254250   20973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:05:24.258354   20973 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:05:24.258416   20973 kubeadm.go:392] StartCluster: {Name:addons-860537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-860537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:05:24.258508   20973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:05:24.258560   20973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:05:24.302067   20973 cri.go:89] found id: ""
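The empty result above is the expected pre-init state: listing CRI containers labelled with the kube-system namespace returns nothing because no control-plane pods exist yet. The same check can be reproduced by hand on the node:

    # Same CRI query minikube runs above; empty output means no kube-system containers are running.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system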
	I0717 00:05:24.302157   20973 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:05:24.312619   20973 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 00:05:24.322258   20973 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 00:05:24.331834   20973 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 00:05:24.331858   20973 kubeadm.go:157] found existing configuration files:
	
	I0717 00:05:24.331906   20973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 00:05:24.340769   20973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 00:05:24.340834   20973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 00:05:24.349950   20973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 00:05:24.358750   20973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 00:05:24.358810   20973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 00:05:24.367676   20973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 00:05:24.376313   20973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 00:05:24.376363   20973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 00:05:24.385733   20973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 00:05:24.394723   20973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 00:05:24.394779   20973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
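Each of the four checks above follows the same pattern: grep the file under /etc/kubernetes for the expected control-plane endpoint and, if the file is missing or does not reference it, remove it so kubeadm can regenerate it. A hedged condensation of that logic (the loop itself is illustrative; the endpoint and file names are from the log):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # stale or missing; kubeadm init rewrites it
      fi
    done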
	I0717 00:05:24.404003   20973 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 00:05:24.611986   20973 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 00:05:35.416833   20973 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 00:05:35.416947   20973 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 00:05:35.417043   20973 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 00:05:35.417141   20973 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 00:05:35.417267   20973 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 00:05:35.417333   20973 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 00:05:35.418906   20973 out.go:204]   - Generating certificates and keys ...
	I0717 00:05:35.418981   20973 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 00:05:35.419082   20973 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 00:05:35.419187   20973 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 00:05:35.419262   20973 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 00:05:35.419349   20973 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 00:05:35.419428   20973 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 00:05:35.419508   20973 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 00:05:35.419630   20973 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-860537 localhost] and IPs [192.168.39.251 127.0.0.1 ::1]
	I0717 00:05:35.419675   20973 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 00:05:35.419798   20973 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-860537 localhost] and IPs [192.168.39.251 127.0.0.1 ::1]
	I0717 00:05:35.419883   20973 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 00:05:35.419967   20973 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 00:05:35.420016   20973 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 00:05:35.420063   20973 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 00:05:35.420106   20973 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 00:05:35.420153   20973 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 00:05:35.420224   20973 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 00:05:35.420314   20973 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 00:05:35.420379   20973 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 00:05:35.420467   20973 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 00:05:35.420544   20973 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 00:05:35.421929   20973 out.go:204]   - Booting up control plane ...
	I0717 00:05:35.422038   20973 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 00:05:35.422132   20973 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 00:05:35.422208   20973 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 00:05:35.422333   20973 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 00:05:35.422447   20973 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 00:05:35.422523   20973 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 00:05:35.422628   20973 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 00:05:35.422715   20973 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 00:05:35.422798   20973 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.586518ms
	I0717 00:05:35.422886   20973 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 00:05:35.422934   20973 kubeadm.go:310] [api-check] The API server is healthy after 5.50219884s
	I0717 00:05:35.423034   20973 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 00:05:35.423144   20973 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 00:05:35.423193   20973 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 00:05:35.423346   20973 kubeadm.go:310] [mark-control-plane] Marking the node addons-860537 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 00:05:35.423417   20973 kubeadm.go:310] [bootstrap-token] Using token: ti7zy9.5fookdc00rt06u2m
	I0717 00:05:35.425681   20973 out.go:204]   - Configuring RBAC rules ...
	I0717 00:05:35.425796   20973 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 00:05:35.425902   20973 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 00:05:35.426064   20973 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 00:05:35.426204   20973 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 00:05:35.426362   20973 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 00:05:35.426471   20973 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 00:05:35.426608   20973 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 00:05:35.426679   20973 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 00:05:35.426743   20973 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 00:05:35.426754   20973 kubeadm.go:310] 
	I0717 00:05:35.426830   20973 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 00:05:35.426841   20973 kubeadm.go:310] 
	I0717 00:05:35.426953   20973 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 00:05:35.426965   20973 kubeadm.go:310] 
	I0717 00:05:35.427018   20973 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 00:05:35.427098   20973 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 00:05:35.427176   20973 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 00:05:35.427193   20973 kubeadm.go:310] 
	I0717 00:05:35.427262   20973 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 00:05:35.427271   20973 kubeadm.go:310] 
	I0717 00:05:35.427342   20973 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 00:05:35.427356   20973 kubeadm.go:310] 
	I0717 00:05:35.427430   20973 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 00:05:35.427560   20973 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 00:05:35.427657   20973 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 00:05:35.427669   20973 kubeadm.go:310] 
	I0717 00:05:35.427780   20973 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 00:05:35.427889   20973 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 00:05:35.427908   20973 kubeadm.go:310] 
	I0717 00:05:35.428011   20973 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ti7zy9.5fookdc00rt06u2m \
	I0717 00:05:35.428132   20973 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 \
	I0717 00:05:35.428174   20973 kubeadm.go:310] 	--control-plane 
	I0717 00:05:35.428186   20973 kubeadm.go:310] 
	I0717 00:05:35.428307   20973 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 00:05:35.428317   20973 kubeadm.go:310] 
	I0717 00:05:35.428422   20973 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ti7zy9.5fookdc00rt06u2m \
	I0717 00:05:35.428611   20973 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 
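The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. If it ever needs to be recomputed on this node, the standard kubeadm recipe applies; note that this run keeps its PKI under /var/lib/minikube/certs rather than the default /etc/kubernetes/pki:

    # Recompute the discovery hash from the CA certificate (path taken from the [certs] lines above).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'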
	I0717 00:05:35.428626   20973 cni.go:84] Creating CNI manager for ""
	I0717 00:05:35.428635   20973 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:05:35.430735   20973 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 00:05:35.431978   20973 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 00:05:35.443102   20973 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
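The 496-byte file written above is the CNI config list for the bridge network selected at cni.go:146. Its exact contents are not captured in this log; a representative bridge-plus-portmap conflist of the kind expected at /etc/cni/net.d/1-k8s.conflist (illustrative values, in particular the pod subnet) looks roughly like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }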
	I0717 00:05:35.463128   20973 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 00:05:35.463193   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:35.463248   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-860537 minikube.k8s.io/updated_at=2024_07_17T00_05_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=addons-860537 minikube.k8s.io/primary=true
	I0717 00:05:35.497423   20973 ops.go:34] apiserver oom_adj: -16
	I0717 00:05:35.592128   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:36.092464   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:36.592777   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:37.092251   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:37.592795   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:38.092344   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:38.592236   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:39.093036   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:39.592280   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:40.093088   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:40.592163   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:41.092581   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:41.592992   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:42.093107   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:42.592421   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:43.092941   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:43.592907   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:44.093116   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:44.592480   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:45.093036   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:45.592615   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:46.092437   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:46.592996   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:47.092879   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:47.592484   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:48.093007   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:48.592482   20973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:48.687372   20973 kubeadm.go:1113] duration metric: took 13.224231507s to wait for elevateKubeSystemPrivileges
	I0717 00:05:48.687416   20973 kubeadm.go:394] duration metric: took 24.42900477s to StartCluster
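The block of repeated `kubectl get sa default` calls above is a poll: the default ServiceAccount only appears once the controller-manager is serving, so minikube retries roughly twice a second until the lookup succeeds before moving on to addon setup. A hedged bash equivalent of that wait (binary and kubeconfig paths as shown in the log; the loop structure and interval are assumptions based on the timestamps):

    until sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows retries at roughly 500ms intervals
    done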
	I0717 00:05:48.687440   20973 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:48.687580   20973 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:05:48.688043   20973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:48.688280   20973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 00:05:48.688303   20973 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:05:48.688355   20973 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
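The toEnable map above reflects the addon flags recorded in the profile; the same set can be inspected or changed per profile with the standard minikube CLI, for example:

    # Inspect and adjust addons for this profile (shown for reference, not run by the test).
    minikube addons list -p addons-860537
    minikube addons enable metrics-server -p addons-860537
    minikube addons disable volcano -p addons-860537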
	I0717 00:05:48.688454   20973 addons.go:69] Setting yakd=true in profile "addons-860537"
	I0717 00:05:48.688473   20973 addons.go:69] Setting inspektor-gadget=true in profile "addons-860537"
	I0717 00:05:48.688495   20973 addons.go:234] Setting addon yakd=true in "addons-860537"
	I0717 00:05:48.688501   20973 config.go:182] Loaded profile config "addons-860537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:05:48.688504   20973 addons.go:234] Setting addon inspektor-gadget=true in "addons-860537"
	I0717 00:05:48.688530   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.688533   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.688536   20973 addons.go:69] Setting volcano=true in profile "addons-860537"
	I0717 00:05:48.688497   20973 addons.go:69] Setting storage-provisioner=true in profile "addons-860537"
	I0717 00:05:48.688571   20973 addons.go:234] Setting addon volcano=true in "addons-860537"
	I0717 00:05:48.688564   20973 addons.go:69] Setting gcp-auth=true in profile "addons-860537"
	I0717 00:05:48.688594   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.688601   20973 addons.go:234] Setting addon storage-provisioner=true in "addons-860537"
	I0717 00:05:48.688603   20973 mustload.go:65] Loading cluster: addons-860537
	I0717 00:05:48.688649   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.688826   20973 config.go:182] Loaded profile config "addons-860537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:05:48.688994   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.689020   20973 addons.go:69] Setting volumesnapshots=true in profile "addons-860537"
	I0717 00:05:48.689020   20973 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-860537"
	I0717 00:05:48.689030   20973 addons.go:69] Setting helm-tiller=true in profile "addons-860537"
	I0717 00:05:48.689039   20973 addons.go:234] Setting addon volumesnapshots=true in "addons-860537"
	I0717 00:05:48.689080   20973 addons.go:69] Setting registry=true in profile "addons-860537"
	I0717 00:05:48.689094   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.689108   20973 addons.go:234] Setting addon registry=true in "addons-860537"
	I0717 00:05:48.689135   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.689159   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.689230   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.689042   20973 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-860537"
	I0717 00:05:48.689257   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.689082   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.689099   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.689601   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.689021   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.689047   20973 addons.go:69] Setting ingress=true in profile "addons-860537"
	I0717 00:05:48.689780   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.689054   20973 addons.go:69] Setting ingress-dns=true in profile "addons-860537"
	I0717 00:05:48.690278   20973 addons.go:234] Setting addon ingress-dns=true in "addons-860537"
	I0717 00:05:48.690373   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.689060   20973 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-860537"
	I0717 00:05:48.689064   20973 addons.go:69] Setting cloud-spanner=true in profile "addons-860537"
	I0717 00:05:48.690502   20973 addons.go:234] Setting addon cloud-spanner=true in "addons-860537"
	I0717 00:05:48.689624   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.690525   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.689065   20973 addons.go:234] Setting addon helm-tiller=true in "addons-860537"
	I0717 00:05:48.689063   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.690569   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.690573   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.691011   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.691046   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.689070   20973 addons.go:69] Setting default-storageclass=true in profile "addons-860537"
	I0717 00:05:48.691869   20973 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-860537"
	I0717 00:05:48.692781   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.692979   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.692311   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.693066   20973 out.go:177] * Verifying Kubernetes components...
	I0717 00:05:48.693088   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.689790   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.692813   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.689076   20973 addons.go:69] Setting metrics-server=true in profile "addons-860537"
	I0717 00:05:48.693506   20973 addons.go:234] Setting addon metrics-server=true in "addons-860537"
	I0717 00:05:48.693548   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.690140   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.690152   20973 addons.go:234] Setting addon ingress=true in "addons-860537"
	I0717 00:05:48.691749   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.691797   20973 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-860537"
	I0717 00:05:48.694381   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.694493   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.701796   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.691819   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.704789   20973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:05:48.689073   20973 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-860537"
	I0717 00:05:48.708312   20973 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-860537"
	I0717 00:05:48.708377   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.709015   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.709077   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.710634   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43079
	I0717 00:05:48.711074   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.711800   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.711825   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.712198   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.712818   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.712855   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.713719   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43699
	I0717 00:05:48.714407   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.715006   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.715043   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.715398   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.716131   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.716213   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.717095   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.717141   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.717262   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43459
	I0717 00:05:48.717776   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.717795   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.718130   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.718439   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.718471   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.729994   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33315
	I0717 00:05:48.730211   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42517
	I0717 00:05:48.730387   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.730401   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.730812   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.731359   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.731492   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.731522   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.731911   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.731940   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.731958   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.732023   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45971
	I0717 00:05:48.732363   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.732366   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.732714   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.732729   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.733651   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.734072   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.734100   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.735707   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.736110   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.736130   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.737047   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.737064   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.737142   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34697
	I0717 00:05:48.737323   20973 addons.go:234] Setting addon default-storageclass=true in "addons-860537"
	I0717 00:05:48.737361   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.737740   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.737741   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.737779   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.738292   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.738326   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.739644   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I0717 00:05:48.740058   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.740641   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.740666   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.741112   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.741128   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.741698   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.741743   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.743924   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0717 00:05:48.744351   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.744879   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.744900   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.745028   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.745046   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.745276   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.745463   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.745495   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.746470   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.746509   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.747952   20973 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-860537"
	I0717 00:05:48.747990   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:48.748245   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.748291   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.759141   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I0717 00:05:48.759647   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.760244   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.760269   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.760631   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.760815   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.763425   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36899
	I0717 00:05:48.763783   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.764284   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.764303   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.764662   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.764718   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.764971   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.766633   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.766772   20973 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 00:05:48.766935   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:48.766950   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:48.767092   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:48.767106   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:48.767114   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:48.767121   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:48.767395   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:48.767408   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	W0717 00:05:48.767498   20973 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0717 00:05:48.768111   20973 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 00:05:48.768126   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 00:05:48.768144   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.770941   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.771336   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.771357   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.771515   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.771719   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.771867   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.772048   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.774768   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0717 00:05:48.775390   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.775890   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.775903   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.776231   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.776871   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.776922   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.777805   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38363
	I0717 00:05:48.778226   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.778636   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.778657   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.779081   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.779641   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.779680   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.782428   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0717 00:05:48.782780   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.783199   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.783211   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.783539   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.783962   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.783976   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.786368   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44499
	I0717 00:05:48.786498   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38981
	I0717 00:05:48.786669   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0717 00:05:48.786805   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.786808   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.787297   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.787305   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.787323   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.787323   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.787481   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.787702   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.787938   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.787999   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.788100   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.789158   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I0717 00:05:48.789261   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0717 00:05:48.789439   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.789452   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.789816   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.790288   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.790301   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.790592   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.791095   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.791127   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.791576   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33149
	I0717 00:05:48.791691   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.792876   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.792896   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.793200   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.793296   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.793773   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.793791   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.793891   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.793933   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.794373   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.794913   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.794967   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.795257   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.795919   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0717 00:05:48.795926   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40339
	I0717 00:05:48.796046   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40057
	I0717 00:05:48.796536   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.796650   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.796747   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.796790   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.797224   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.797240   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.797262   20973 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 00:05:48.797305   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.797569   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.797585   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.798049   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.798100   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.798262   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.798635   20973 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 00:05:48.798784   20973 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 00:05:48.798797   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 00:05:48.798814   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.799363   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0717 00:05:48.799560   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.799604   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.800115   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33359
	I0717 00:05:48.800136   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.800442   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.800961   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.800979   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.801375   20973 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 00:05:48.801470   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.801649   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.802193   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.802208   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.802257   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42291
	I0717 00:05:48.802620   20973 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 00:05:48.802641   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 00:05:48.802659   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.802670   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.802724   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.802725   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.802968   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.803169   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.803183   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.803269   20973 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:05:48.803300   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.803335   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.803532   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.803731   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.803862   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.803876   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.804238   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.804419   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.804519   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.804537   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.804868   20973 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:05:48.804885   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:05:48.804910   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.804972   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.805072   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:48.805103   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:48.805263   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.805465   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.805715   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.807377   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.807660   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.807962   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.808025   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.808063   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.808291   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.808594   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.808757   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.808867   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.809348   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.809458   20973 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 00:05:48.809774   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.809980   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.810066   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.810228   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.810404   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.810443   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 00:05:48.810561   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.811160   20973 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 00:05:48.811177   20973 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 00:05:48.811194   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.812937   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 00:05:48.814121   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 00:05:48.814678   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.815181   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.815199   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.815343   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.815597   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.815740   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.815853   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.816978   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 00:05:48.818345   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 00:05:48.819709   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36909
	I0717 00:05:48.819717   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 00:05:48.820267   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.820819   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.820835   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.821216   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.821517   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.822068   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 00:05:48.823332   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 00:05:48.823466   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.824666   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 00:05:48.824686   20973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 00:05:48.824707   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.825454   20973 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 00:05:48.826642   20973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 00:05:48.826660   20973 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 00:05:48.826680   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.827987   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.828362   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.828393   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.828638   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.828797   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.828949   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.828955   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0717 00:05:48.829077   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.829442   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34451
	I0717 00:05:48.829744   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.830269   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.830292   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.830611   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.830649   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.830828   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.831092   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.831253   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.831258   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.831497   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.831520   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.831709   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.831860   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.832432   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.832452   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.832841   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.832889   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.833430   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.834950   20973 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 00:05:48.835389   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.836641   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34591
	I0717 00:05:48.837370   20973 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:05:48.838793   20973 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:05:48.838944   20973 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 00:05:48.839045   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34457
	I0717 00:05:48.839163   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41645
	I0717 00:05:48.839254   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.839784   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.839936   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.839963   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.840370   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.840395   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.840444   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.840482   20973 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:05:48.840503   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 00:05:48.840509   20973 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 00:05:48.840520   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.840524   20973 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 00:05:48.840626   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.840827   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.840878   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.840998   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.841493   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I0717 00:05:48.842382   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.842692   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.843324   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.843721   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.843540   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.843782   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.844096   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.844160   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.844192   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.844690   20973 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:05:48.844707   20973 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:05:48.844716   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.844724   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.845409   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.845435   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0717 00:05:48.845446   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.845467   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.845477   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.845531   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.845962   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.845991   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.845972   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:48.846025   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.846187   20973 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 00:05:48.846526   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.846586   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.846604   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:48.846619   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:48.846690   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.846835   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.846870   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.846962   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.847032   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.847333   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:48.847369   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.847857   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:48.848092   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.848305   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.848899   20973 out.go:177]   - Using image docker.io/busybox:stable
	I0717 00:05:48.849341   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:48.849546   20973 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 00:05:48.849552   20973 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 00:05:48.849704   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.850179   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.850205   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.850497   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.850662   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.850710   20973 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:05:48.850723   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 00:05:48.850736   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.850838   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.850963   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.851334   20973 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 00:05:48.851337   20973 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 00:05:48.851466   20973 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 00:05:48.851482   20973 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:05:48.851492   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 00:05:48.851503   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.851484   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.852789   20973 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:05:48.852807   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 00:05:48.852823   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:48.855517   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.855702   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.856083   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.856122   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.856193   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.856215   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.856457   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.856457   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.856601   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.856788   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.856853   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.856898   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.857115   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.857124   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.857143   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.857166   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.857175   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.857410   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:48.857405   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.857421   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.857432   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:48.857447   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.857589   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:48.857806   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:48.857832   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.857989   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:48.858026   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:48.858155   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	W0717 00:05:48.858796   20973 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41888->192.168.39.251:22: read: connection reset by peer
	I0717 00:05:48.858824   20973 retry.go:31] will retry after 157.983593ms: ssh: handshake failed: read tcp 192.168.39.1:41888->192.168.39.251:22: read: connection reset by peer
	W0717 00:05:49.020376   20973 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41912->192.168.39.251:22: read: connection reset by peer
	I0717 00:05:49.020411   20973 retry.go:31] will retry after 328.328052ms: ssh: handshake failed: read tcp 192.168.39.1:41912->192.168.39.251:22: read: connection reset by peer
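	[editor's note] The two handshake failures above are not fatal: sshutil reports "dial failure (will retry)" and retry.go re-runs the dial after a short delay. A minimal sketch of that retry-after-delay pattern, in Go; the helper name, attempt count, and delay below are illustrative and not minikube's actual retry.go API:

```go
package main

import (
	"fmt"
	"time"
)

// retryAfter is a hypothetical helper mirroring the "will retry after ..." lines
// in the log: run fn, and on failure sleep for the given delay before trying again.
func retryAfter(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	dials := 0
	err := retryAfter(3, 200*time.Millisecond, func() error {
		dials++
		if dials < 3 {
			// simulate the transient failure seen above
			return fmt.Errorf("ssh: handshake failed: connection reset by peer")
		}
		return nil // third dial succeeds
	})
	fmt.Println("result:", err)
}
```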
	I0717 00:05:49.154190   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 00:05:49.174373   20973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:05:49.174452   20973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 00:05:49.251082   20973 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 00:05:49.251106   20973 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 00:05:49.283672   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:05:49.305146   20973 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 00:05:49.305170   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 00:05:49.334802   20973 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 00:05:49.334832   20973 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 00:05:49.336791   20973 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 00:05:49.336809   20973 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 00:05:49.338650   20973 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 00:05:49.338664   20973 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 00:05:49.340568   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 00:05:49.340587   20973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 00:05:49.363219   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:05:49.369498   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:05:49.379256   20973 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:05:49.379282   20973 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 00:05:49.382569   20973 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 00:05:49.382589   20973 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 00:05:49.388619   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:05:49.426553   20973 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 00:05:49.426583   20973 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 00:05:49.445865   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:05:49.464581   20973 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 00:05:49.464607   20973 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 00:05:49.525125   20973 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 00:05:49.525156   20973 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 00:05:49.565480   20973 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:05:49.565504   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 00:05:49.573648   20973 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 00:05:49.573680   20973 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 00:05:49.585219   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 00:05:49.585249   20973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 00:05:49.588300   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:05:49.597519   20973 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 00:05:49.597539   20973 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 00:05:49.641378   20973 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:05:49.641399   20973 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 00:05:49.700291   20973 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 00:05:49.700319   20973 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 00:05:49.729450   20973 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 00:05:49.729477   20973 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 00:05:49.737087   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:05:49.738313   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 00:05:49.738327   20973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 00:05:49.759225   20973 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 00:05:49.759254   20973 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 00:05:49.814408   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:05:49.916567   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:05:49.932260   20973 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:05:49.932284   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 00:05:49.968232   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 00:05:49.968260   20973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 00:05:50.013627   20973 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 00:05:50.013647   20973 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 00:05:50.058203   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 00:05:50.058227   20973 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 00:05:50.213766   20973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 00:05:50.213792   20973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 00:05:50.245355   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:05:50.285878   20973 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 00:05:50.285903   20973 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 00:05:50.344246   20973 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:05:50.344278   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 00:05:50.362038   20973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 00:05:50.362065   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 00:05:50.425836   20973 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:05:50.425853   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 00:05:50.621765   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:05:50.635573   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:05:50.657228   20973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 00:05:50.657250   20973 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 00:05:50.822234   20973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 00:05:50.822263   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 00:05:50.885960   20973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 00:05:50.885983   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 00:05:51.060470   20973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:05:51.060499   20973 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 00:05:51.330535   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.176305812s)
	I0717 00:05:51.330581   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:51.330595   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:51.330638   20973 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.156184702s)
	I0717 00:05:51.330671   20973 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.156191366s)
	I0717 00:05:51.330686   20973 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 00:05:51.330867   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:51.330882   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:51.330895   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:51.330906   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:51.331630   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:51.331682   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:51.331634   20973 node_ready.go:35] waiting up to 6m0s for node "addons-860537" to be "Ready" ...
	I0717 00:05:51.331653   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:51.336267   20973 node_ready.go:49] node "addons-860537" has status "Ready":"True"
	I0717 00:05:51.336284   20973 node_ready.go:38] duration metric: took 4.577513ms for node "addons-860537" to be "Ready" ...
	I0717 00:05:51.336292   20973 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:05:51.345614   20973 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b656z" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:51.574110   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:05:51.835993   20973 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-860537" context rescaled to 1 replicas
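	[editor's note] The CoreDNS step at 00:05:49-00:05:51 pipes the coredns ConfigMap through sed to insert a hosts block ahead of the "forward . /etc/resolv.conf" line, so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 here). A small Go sketch of that transformation on a trimmed Corefile; it reproduces only the hosts insertion and omits the extra "log" line the real command also adds:

```go
package main

import (
	"fmt"
	"strings"
)

// A trimmed Corefile; the real one carries more plugins.
const corefile = `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}`

// injectHostRecord inserts a hosts block for host.minikube.internal
// immediately before the forward plugin, mimicking the sed pipeline.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line + "\n")
	}
	return out.String()
}

func main() {
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
```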
	I0717 00:05:53.655210   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.371501679s)
	I0717 00:05:53.655266   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:53.655274   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:53.655574   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:53.655598   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:53.655608   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:53.655616   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:53.655622   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:53.655862   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:53.655878   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:53.682419   20973 pod_ready.go:102] pod "coredns-7db6d8ff4d-b656z" in "kube-system" namespace has status "Ready":"False"
	I0717 00:05:53.713046   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.349784764s)
	I0717 00:05:53.713116   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:53.713132   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:53.713449   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:53.713478   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:53.713482   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:53.713492   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:53.713501   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:53.713735   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:53.713749   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:55.797574   20973 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 00:05:55.797616   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:55.800870   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:55.801356   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:55.801385   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:55.801555   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:55.801753   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:55.801893   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:55.802033   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:55.852120   20973 pod_ready.go:102] pod "coredns-7db6d8ff4d-b656z" in "kube-system" namespace has status "Ready":"False"
	I0717 00:05:56.185497   20973 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 00:05:56.355890   20973 addons.go:234] Setting addon gcp-auth=true in "addons-860537"
	I0717 00:05:56.355953   20973 host.go:66] Checking if "addons-860537" exists ...
	I0717 00:05:56.356410   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:56.356449   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:56.372343   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0717 00:05:56.372787   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:56.373351   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:56.373378   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:56.373769   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:56.374311   20973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:05:56.374335   20973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:05:56.388828   20973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0717 00:05:56.389226   20973 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:05:56.389729   20973 main.go:141] libmachine: Using API Version  1
	I0717 00:05:56.389747   20973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:05:56.390075   20973 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:05:56.390258   20973 main.go:141] libmachine: (addons-860537) Calling .GetState
	I0717 00:05:56.391864   20973 main.go:141] libmachine: (addons-860537) Calling .DriverName
	I0717 00:05:56.392088   20973 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 00:05:56.392111   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHHostname
	I0717 00:05:56.394673   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:56.395039   20973 main.go:141] libmachine: (addons-860537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:b6:26", ip: ""} in network mk-addons-860537: {Iface:virbr1 ExpiryTime:2024-07-17 01:05:09 +0000 UTC Type:0 Mac:52:54:00:fb:b6:26 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:addons-860537 Clientid:01:52:54:00:fb:b6:26}
	I0717 00:05:56.395065   20973 main.go:141] libmachine: (addons-860537) DBG | domain addons-860537 has defined IP address 192.168.39.251 and MAC address 52:54:00:fb:b6:26 in network mk-addons-860537
	I0717 00:05:56.395249   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHPort
	I0717 00:05:56.395427   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHKeyPath
	I0717 00:05:56.395569   20973 main.go:141] libmachine: (addons-860537) Calling .GetSSHUsername
	I0717 00:05:56.395681   20973 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/addons-860537/id_rsa Username:docker}
	I0717 00:05:56.883632   20973 pod_ready.go:92] pod "coredns-7db6d8ff4d-b656z" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:56.883663   20973 pod_ready.go:81] duration metric: took 5.538025473s for pod "coredns-7db6d8ff4d-b656z" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:56.883677   20973 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x569p" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:56.936885   20973 pod_ready.go:92] pod "coredns-7db6d8ff4d-x569p" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:56.936918   20973 pod_ready.go:81] duration metric: took 53.232285ms for pod "coredns-7db6d8ff4d-x569p" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:56.936933   20973 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.008504   20973 pod_ready.go:92] pod "etcd-addons-860537" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:57.008534   20973 pod_ready.go:81] duration metric: took 71.592091ms for pod "etcd-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.008547   20973 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.038261   20973 pod_ready.go:92] pod "kube-apiserver-addons-860537" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:57.038282   20973 pod_ready.go:81] duration metric: took 29.727649ms for pod "kube-apiserver-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.038292   20973 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.130106   20973 pod_ready.go:92] pod "kube-controller-manager-addons-860537" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:57.130144   20973 pod_ready.go:81] duration metric: took 91.844778ms for pod "kube-controller-manager-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.130159   20973 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6kwx2" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.275961   20973 pod_ready.go:92] pod "kube-proxy-6kwx2" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:57.275985   20973 pod_ready.go:81] duration metric: took 145.817601ms for pod "kube-proxy-6kwx2" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.275997   20973 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-860537" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.663819   20973 pod_ready.go:92] pod "kube-scheduler-addons-860537" in "kube-system" namespace has status "Ready":"True"
	I0717 00:05:57.663847   20973 pod_ready.go:81] duration metric: took 387.842076ms for pod "kube-scheduler-addons-860537" in "kube-system" namespace to be "Ready" ...
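	[editor's note] The pod_ready waits above repeatedly check each system pod's Ready condition until it reports "True" (coredns-7db6d8ff4d-b656z flips from "False" at 00:05:53/55 to "True" at 00:05:56). A rough equivalent of that check, shelling out to kubectl with a JSONPath filter; minikube itself uses client-go rather than kubectl, and the context, namespace, pod name and timeout below are taken from the log purely for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls the pod's Ready condition until it is "True" or the
// timeout expires.
func waitPodReady(ctx, namespace, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "-n", namespace,
			"get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
}

func main() {
	if err := waitPodReady("addons-860537", "kube-system", "coredns-7db6d8ff4d-b656z", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```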
	I0717 00:05:57.663860   20973 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:57.759432   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.389899649s)
	I0717 00:05:57.759487   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759503   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.759508   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.370859481s)
	I0717 00:05:57.759556   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759572   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.759590   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.31369437s)
	I0717 00:05:57.759622   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759634   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.759636   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.171302199s)
	I0717 00:05:57.759665   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759669   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.022556742s)
	I0717 00:05:57.759682   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.759787   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.945334344s)
	I0717 00:05:57.759812   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759817   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.759826   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.759858   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.759867   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.759876   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759884   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.759930   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.843339825s)
	I0717 00:05:57.759947   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.759955   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.760024   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.514642074s)
	I0717 00:05:57.760040   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.760050   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.760173   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.138377137s)
	W0717 00:05:57.760202   20973 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 00:05:57.760228   20973 retry.go:31] will retry after 357.546872ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
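
The failure above is a CRD-registration race: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, so the first apply fails with "ensure CRDs are installed first" and minikube schedules a retry (the re-apply with --force at 00:05:58 below then succeeds). As an aside, a minimal client-go sketch of the underlying pattern is given here: wait for a CRD to reach the Established condition before applying objects of that kind. This is illustrative only, not minikube's actual addon code; the kubeconfig path and CRD name are taken from the log above.

// crdwait.go: minimal sketch (not minikube's code) of waiting for a CRD to be
// Established before applying custom resources that depend on it.
package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForCRDEstablished polls the named CRD until its Established condition is True.
func waitForCRDEstablished(ctx context.Context, c apiextclient.Interface, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("CRD %s not established: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// Kubeconfig path as used by the kubectl invocations in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := apiextclient.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	// The CRD the failed apply depended on, per the stderr above.
	if err := waitForCRDEstablished(ctx, client, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
		panic(err)
	}
	fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
}
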
	I0717 00:05:57.760308   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.124693251s)
	I0717 00:05:57.760337   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.760376   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.760394   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.760398   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.760408   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.760419   20973 addons.go:475] Verifying addon ingress=true in "addons-860537"
	I0717 00:05:57.760427   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.760446   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.760475   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.760483   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.760492   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.760500   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.760617   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.760629   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.760638   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.760846   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.760858   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.761250   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.761290   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.761312   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.761325   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.761774   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.761824   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.761918   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.761965   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.761985   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.762022   20973 addons.go:475] Verifying addon metrics-server=true in "addons-860537"
	I0717 00:05:57.762203   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.762233   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.762240   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.762248   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.762254   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.763027   20973 out.go:177] * Verifying ingress addon...
	I0717 00:05:57.763209   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.763228   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.763228   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.763241   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.763247   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.763251   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.763260   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.763261   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.763277   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.763281   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.763252   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.763316   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.763319   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.763323   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.763338   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.763344   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.763350   20973 addons.go:475] Verifying addon registry=true in "addons-860537"
	I0717 00:05:57.763599   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.763633   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.763642   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.764290   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.764307   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.764469   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.764480   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.764487   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.764495   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.764495   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.764502   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.764525   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.764532   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.764539   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.764546   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.764666   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.764671   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.764680   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.764961   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.765017   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.765043   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.765441   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.766298   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:57.765685   20973 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 00:05:57.765727   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.768030   20973 out.go:177] * Verifying registry addon...
	I0717 00:05:57.768911   20973 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-860537 service yakd-dashboard -n yakd-dashboard
	
	I0717 00:05:57.770465   20973 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 00:05:57.789659   20973 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 00:05:57.789686   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
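
The kapi.go:96 lines that dominate the remainder of this log are a polling loop: list the pods matching a label selector in a namespace and report their phase until they leave Pending. A minimal client-go sketch of that pattern follows, for orientation only; it is not minikube's kapi implementation, and the namespace and selector are the ones appearing in the surrounding log lines.

// podwait.go: minimal sketch of polling pods by label selector until Running,
// mirroring the "waiting for pod ..." kapi.go lines in this log (not minikube's code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPodsRunning(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
		panic(err)
	}
}
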
	I0717 00:05:57.802267   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.802289   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.802601   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.802681   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.802713   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	W0717 00:05:57.802807   20973 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0717 00:05:57.808245   20973 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 00:05:57.808270   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:57.822493   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:57.822519   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:57.822763   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:57.822776   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:57.822783   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:58.118849   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:05:58.279958   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:58.287494   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:58.795339   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:58.795863   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:59.359001   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:59.379427   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:59.444182   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.870026514s)
	I0717 00:05:59.444242   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:59.444256   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:59.444297   20973 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.052184704s)
	I0717 00:05:59.444564   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:59.444586   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:59.444597   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:05:59.444606   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:05:59.444609   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:05:59.444870   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:05:59.444888   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:05:59.444903   20973 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-860537"
	I0717 00:05:59.446484   20973 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 00:05:59.446501   20973 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 00:05:59.448406   20973 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:05:59.449231   20973 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 00:05:59.450336   20973 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 00:05:59.450354   20973 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 00:05:59.467443   20973 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 00:05:59.467474   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:59.630432   20973 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 00:05:59.630469   20973 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 00:05:59.677617   20973 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace has status "Ready":"False"
	I0717 00:05:59.712260   20973 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:05:59.712288   20973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 00:05:59.775719   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:59.780707   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:59.834038   20973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:05:59.954951   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:00.271736   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:00.275545   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:00.343979   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.225082634s)
	I0717 00:06:00.344021   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:06:00.344033   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:06:00.344306   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:06:00.344328   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:06:00.344339   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:06:00.344348   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:06:00.344354   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:06:00.344588   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:06:00.344601   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:06:00.344622   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:06:00.454957   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:00.795541   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:00.832803   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:00.874657   20973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.040570421s)
	I0717 00:06:00.874711   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:06:00.874727   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:06:00.875011   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:06:00.875032   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:06:00.875041   20973 main.go:141] libmachine: Making call to close driver server
	I0717 00:06:00.875049   20973 main.go:141] libmachine: (addons-860537) Calling .Close
	I0717 00:06:00.875087   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:06:00.875276   20973 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:06:00.875348   20973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:06:00.875329   20973 main.go:141] libmachine: (addons-860537) DBG | Closing plugin on server side
	I0717 00:06:00.876644   20973 addons.go:475] Verifying addon gcp-auth=true in "addons-860537"
	I0717 00:06:00.878268   20973 out.go:177] * Verifying gcp-auth addon...
	I0717 00:06:00.880057   20973 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 00:06:00.916212   20973 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 00:06:00.916237   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:00.987581   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:01.276954   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:01.281730   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:01.392885   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:01.455200   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:01.687479   20973 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:01.780686   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:01.784290   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:01.891113   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:01.956285   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:02.271369   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:02.286313   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:02.385084   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:02.455158   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:02.770694   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:02.781678   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:02.900885   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:02.967230   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:03.270836   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:03.274520   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:03.382882   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:03.456330   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:03.771303   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:03.789189   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:03.884628   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:03.954903   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:04.169899   20973 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:04.270501   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:04.274948   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:04.383864   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:04.454958   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:04.770914   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:04.774483   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:04.883547   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:04.955240   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:05.271098   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:05.275254   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:05.384152   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:05.455267   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:05.771201   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:05.775056   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:05.883968   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:05.956042   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:06.270522   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:06.273674   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:06.384325   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:06.455049   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:06.670726   20973 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:06.770821   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:06.774443   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:06.883797   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:06.954494   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:07.270919   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:07.274486   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:07.695845   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:07.696510   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:07.771311   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:07.773973   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:07.883768   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:07.954633   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:08.271664   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:08.274784   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:08.385723   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:08.454960   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:08.771339   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:08.774316   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:08.883767   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:08.955536   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:09.170426   20973 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:09.271367   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:09.275638   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:09.384157   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:09.457412   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:09.856151   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:09.856365   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:09.884481   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:09.955625   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:10.272010   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:10.275464   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:10.383333   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:10.456595   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:10.670169   20973 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:10.670189   20973 pod_ready.go:81] duration metric: took 13.006321739s for pod "nvidia-device-plugin-daemonset-pcbjh" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:10.670196   20973 pod_ready.go:38] duration metric: took 19.333895971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:06:10.670209   20973 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:06:10.670263   20973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:06:10.701286   20973 api_server.go:72] duration metric: took 22.012949714s to wait for apiserver process to appear ...
	I0717 00:06:10.701307   20973 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:06:10.701324   20973 api_server.go:253] Checking apiserver healthz at https://192.168.39.251:8443/healthz ...
	I0717 00:06:10.705334   20973 api_server.go:279] https://192.168.39.251:8443/healthz returned 200:
	ok
	I0717 00:06:10.706257   20973 api_server.go:141] control plane version: v1.30.2
	I0717 00:06:10.706278   20973 api_server.go:131] duration metric: took 4.963458ms to wait for apiserver health ...
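
The api_server.go lines above poll the apiserver's /healthz endpoint until it returns 200 "ok". A minimal standalone sketch of such a probe is shown here for illustration only; minikube's real check authenticates against the cluster CA rather than skipping TLS verification, and the endpoint is the one logged above.

// healthz.go: minimal sketch of the apiserver healthz probe seen above
// (illustrative only; not minikube's api_server.go).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// For brevity this skips certificate verification; a real client should
		// trust the cluster CA from the kubeconfig instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.251:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}
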
	I0717 00:06:10.706287   20973 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:06:10.716668   20973 system_pods.go:59] 18 kube-system pods found
	I0717 00:06:10.716695   20973 system_pods.go:61] "coredns-7db6d8ff4d-x569p" [1e4c6914-ede3-4b0b-b696-83768c15f61f] Running
	I0717 00:06:10.716703   20973 system_pods.go:61] "csi-hostpath-attacher-0" [1e997ac0-7c52-48b7-9a1a-bf461ba09162] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 00:06:10.716709   20973 system_pods.go:61] "csi-hostpath-resizer-0" [b4942844-db70-42e9-b530-db4bcfb28f68] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 00:06:10.716716   20973 system_pods.go:61] "csi-hostpathplugin-spxjk" [01553a53-f10f-43eb-8581-452ce918ba15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 00:06:10.716724   20973 system_pods.go:61] "etcd-addons-860537" [4ece71aa-f418-49c5-b9d6-328918a4520a] Running
	I0717 00:06:10.716731   20973 system_pods.go:61] "kube-apiserver-addons-860537" [2a014807-df86-4a41-bb77-45cdd720c9bc] Running
	I0717 00:06:10.716736   20973 system_pods.go:61] "kube-controller-manager-addons-860537" [c9390bef-106f-4c8f-b0c7-bdbb3cf6a3a7] Running
	I0717 00:06:10.716748   20973 system_pods.go:61] "kube-ingress-dns-minikube" [a772ebab-91ad-4da1-be93-836f7a6b65a9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 00:06:10.716754   20973 system_pods.go:61] "kube-proxy-6kwx2" [95bc49e4-c111-4184-83f6-14800ece6dc1] Running
	I0717 00:06:10.716761   20973 system_pods.go:61] "kube-scheduler-addons-860537" [f0353750-5ac0-464a-9f2c-1e926a5ba6dc] Running
	I0717 00:06:10.716768   20973 system_pods.go:61] "metrics-server-c59844bb4-zq4m7" [332284a0-4c05-4737-8669-c71012684bb2] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 00:06:10.716777   20973 system_pods.go:61] "nvidia-device-plugin-daemonset-pcbjh" [631d74e8-bdf2-43b3-b053-cdcade929069] Running
	I0717 00:06:10.716786   20973 system_pods.go:61] "registry-proxy-vpbzw" [961d65cb-7faf-4f3a-86ef-8916920fcba6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 00:06:10.716796   20973 system_pods.go:61] "registry-v6n4c" [66c9585d-752a-4ad2-9c99-b9bff568c44d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 00:06:10.716809   20973 system_pods.go:61] "snapshot-controller-745499f584-6fsd7" [a0ab2b73-f917-4c6c-95f8-d516cf54a3f1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 00:06:10.716818   20973 system_pods.go:61] "snapshot-controller-745499f584-z8rr5" [49153b18-e1ad-4512-9ead-6a432b9e0c7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 00:06:10.716829   20973 system_pods.go:61] "storage-provisioner" [71073df2-0967-430a-94e9-5a3641c16eed] Running
	I0717 00:06:10.716837   20973 system_pods.go:61] "tiller-deploy-6677d64bcd-5nxgc" [77b4eedd-c82b-401f-9057-a7a11b13510b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0717 00:06:10.716847   20973 system_pods.go:74] duration metric: took 10.554229ms to wait for pod list to return data ...
	I0717 00:06:10.716858   20973 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:06:10.719747   20973 default_sa.go:45] found service account: "default"
	I0717 00:06:10.719767   20973 default_sa.go:55] duration metric: took 2.901867ms for default service account to be created ...
	I0717 00:06:10.719775   20973 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:06:10.732167   20973 system_pods.go:86] 18 kube-system pods found
	I0717 00:06:10.732195   20973 system_pods.go:89] "coredns-7db6d8ff4d-x569p" [1e4c6914-ede3-4b0b-b696-83768c15f61f] Running
	I0717 00:06:10.732206   20973 system_pods.go:89] "csi-hostpath-attacher-0" [1e997ac0-7c52-48b7-9a1a-bf461ba09162] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 00:06:10.732216   20973 system_pods.go:89] "csi-hostpath-resizer-0" [b4942844-db70-42e9-b530-db4bcfb28f68] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 00:06:10.732226   20973 system_pods.go:89] "csi-hostpathplugin-spxjk" [01553a53-f10f-43eb-8581-452ce918ba15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 00:06:10.732236   20973 system_pods.go:89] "etcd-addons-860537" [4ece71aa-f418-49c5-b9d6-328918a4520a] Running
	I0717 00:06:10.732243   20973 system_pods.go:89] "kube-apiserver-addons-860537" [2a014807-df86-4a41-bb77-45cdd720c9bc] Running
	I0717 00:06:10.732250   20973 system_pods.go:89] "kube-controller-manager-addons-860537" [c9390bef-106f-4c8f-b0c7-bdbb3cf6a3a7] Running
	I0717 00:06:10.732264   20973 system_pods.go:89] "kube-ingress-dns-minikube" [a772ebab-91ad-4da1-be93-836f7a6b65a9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 00:06:10.732273   20973 system_pods.go:89] "kube-proxy-6kwx2" [95bc49e4-c111-4184-83f6-14800ece6dc1] Running
	I0717 00:06:10.732283   20973 system_pods.go:89] "kube-scheduler-addons-860537" [f0353750-5ac0-464a-9f2c-1e926a5ba6dc] Running
	I0717 00:06:10.732293   20973 system_pods.go:89] "metrics-server-c59844bb4-zq4m7" [332284a0-4c05-4737-8669-c71012684bb2] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 00:06:10.732305   20973 system_pods.go:89] "nvidia-device-plugin-daemonset-pcbjh" [631d74e8-bdf2-43b3-b053-cdcade929069] Running
	I0717 00:06:10.732314   20973 system_pods.go:89] "registry-proxy-vpbzw" [961d65cb-7faf-4f3a-86ef-8916920fcba6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 00:06:10.732324   20973 system_pods.go:89] "registry-v6n4c" [66c9585d-752a-4ad2-9c99-b9bff568c44d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 00:06:10.732340   20973 system_pods.go:89] "snapshot-controller-745499f584-6fsd7" [a0ab2b73-f917-4c6c-95f8-d516cf54a3f1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 00:06:10.732353   20973 system_pods.go:89] "snapshot-controller-745499f584-z8rr5" [49153b18-e1ad-4512-9ead-6a432b9e0c7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 00:06:10.732361   20973 system_pods.go:89] "storage-provisioner" [71073df2-0967-430a-94e9-5a3641c16eed] Running
	I0717 00:06:10.732373   20973 system_pods.go:89] "tiller-deploy-6677d64bcd-5nxgc" [77b4eedd-c82b-401f-9057-a7a11b13510b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0717 00:06:10.732383   20973 system_pods.go:126] duration metric: took 12.600811ms to wait for k8s-apps to be running ...
	I0717 00:06:10.732397   20973 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:06:10.732446   20973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:06:10.770442   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:10.777108   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:10.778962   20973 system_svc.go:56] duration metric: took 46.562029ms WaitForService to wait for kubelet
	I0717 00:06:10.778982   20973 kubeadm.go:582] duration metric: took 22.090648397s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:06:10.779004   20973 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:06:10.783136   20973 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:06:10.783157   20973 node_conditions.go:123] node cpu capacity is 2
	I0717 00:06:10.783167   20973 node_conditions.go:105] duration metric: took 4.158763ms to run NodePressure ...
	I0717 00:06:10.783176   20973 start.go:241] waiting for startup goroutines ...
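
The node_conditions.go lines above report node capacity and verify that no pressure conditions are set before startup completes. A minimal client-go sketch of reading the same fields from the Node status (illustrative only, not minikube's node_conditions.go; kubeconfig path taken from this log):

// nodepressure.go: minimal sketch of the node capacity / pressure check logged above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, eph.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should all be False on a healthy node.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
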
	I0717 00:06:10.884894   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:10.954274   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:11.270696   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:11.274091   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:11.384697   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:11.460046   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:11.770510   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:11.774870   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:11.885635   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:11.954667   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:12.273270   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:12.278815   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:12.384083   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:12.455613   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:12.771037   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:12.775894   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:12.883766   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:12.955108   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:13.272179   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:13.275126   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:13.384487   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:13.455480   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:13.770604   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:13.774304   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:14.127552   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:14.132254   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:14.283771   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:14.286030   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:14.384010   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:14.454505   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:14.771615   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:14.774785   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:14.883525   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:14.954724   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:15.271228   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:15.275234   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:15.384530   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:15.454276   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:15.770997   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:15.774894   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:15.884048   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:15.956402   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:16.270251   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:16.274555   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:16.383273   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:16.454946   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:16.771252   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:16.775018   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:16.884720   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:16.956410   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:17.271465   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:17.275566   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:17.384760   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:17.454693   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:17.771072   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:17.774747   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:17.883957   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:17.955199   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:18.270824   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:18.274201   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:18.386259   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:18.455479   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:18.780047   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:18.793097   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:18.884250   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:18.955906   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:19.271291   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:19.275143   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:19.383751   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:19.454682   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:19.771293   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:19.774461   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:19.884266   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:19.955263   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:20.272011   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:20.275476   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:20.385087   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:20.455942   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:20.771469   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:20.775282   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:20.884137   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:20.956857   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:21.270966   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:21.274644   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:21.383440   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:21.454674   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:21.771154   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:21.777147   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:21.883974   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:21.954751   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:22.272425   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:22.275186   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:22.384110   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:22.454954   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:22.772250   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:22.774672   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:22.883786   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:22.955186   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:23.621433   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:23.622076   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:23.622409   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:23.629748   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:23.770714   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:23.774437   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:23.884061   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:23.955465   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:24.271012   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:24.274430   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:24.383201   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:24.455048   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:24.771144   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:24.775483   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:24.883955   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:24.955238   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:25.270853   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:25.274266   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:25.384519   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:25.454563   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:25.770920   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:25.774858   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:25.884044   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:25.956885   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:26.335532   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:26.343315   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:26.384512   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:26.454247   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:26.770872   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:26.774776   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:26.883623   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:26.954250   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:27.271527   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:27.274948   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:27.383596   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:27.454908   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:27.770265   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:27.774086   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:27.883946   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:27.954469   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:28.270856   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:28.274560   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:28.678904   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:28.679860   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:28.771330   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:28.775705   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:28.884033   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:28.955475   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:29.270176   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:29.274123   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:29.384063   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:29.456230   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:29.770662   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:29.774319   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:29.883929   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:29.955469   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:30.270907   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:30.274591   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:30.383583   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:30.454667   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:30.773176   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:30.778300   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:30.884229   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:30.954778   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:31.271993   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:31.275554   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:31.384353   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:31.455822   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:31.770936   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:31.774447   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:31.883691   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:31.956027   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:32.270884   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:32.274457   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:32.384530   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:32.455529   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:32.771100   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:32.775072   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:32.884058   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:32.954993   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:33.271371   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:33.274303   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:33.384014   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:33.455171   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:33.780199   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:33.784226   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:33.883518   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:33.954765   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:34.271424   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:34.274425   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:34.383596   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:34.454560   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:34.771706   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:34.774750   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:34.885101   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:34.958350   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:35.271245   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:35.274830   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:35.383684   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:35.455469   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:35.807600   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:35.807754   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:35.912794   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:35.966387   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:36.271359   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:36.274836   20973 kapi.go:107] duration metric: took 38.504367856s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 00:06:36.383542   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:36.455197   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:36.771696   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:36.884414   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:36.954959   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:37.270891   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:37.383603   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:37.454651   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:37.771254   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:37.884151   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:37.954699   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:38.271212   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:38.383607   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:38.455495   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:38.846075   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:38.884206   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:38.956038   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:39.271628   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:39.384915   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:39.455044   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:39.771135   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:39.883804   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:39.954154   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:40.270731   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:40.388583   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:40.454569   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:40.770593   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:40.884320   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:40.962694   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:41.271182   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:41.383525   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:41.454202   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:41.770437   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:41.884544   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:41.955235   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:42.270140   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:42.384629   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:42.454534   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:42.773060   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:42.883738   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:42.954576   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:43.271077   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:43.383788   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:43.454639   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:43.770329   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:43.883343   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:43.954749   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:44.271309   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:44.384920   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:44.454633   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:44.771792   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:44.882878   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:44.954918   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:45.272890   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:45.384093   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:45.454737   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:46.231550   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:46.231977   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:46.239003   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:46.277888   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:46.387144   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:46.455506   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:46.770996   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:46.884195   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:46.957673   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:47.272782   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:47.383872   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:47.455405   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:47.770871   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:47.884553   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:47.954672   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:48.273699   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:48.390993   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:48.455027   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:48.777837   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:48.884569   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:48.954894   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:49.275860   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:49.384761   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:49.455429   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:49.770433   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:49.884577   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:49.954551   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:50.270806   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:50.386619   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:50.464738   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:50.773765   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:50.884220   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:50.964917   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:51.270682   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:51.383642   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:51.456388   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:51.770697   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:51.888696   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:51.960416   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:52.271706   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:52.384741   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:52.455098   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:52.770333   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:52.884069   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:52.954785   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:53.273009   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:53.386712   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:53.455112   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:53.771982   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:53.884513   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:53.955182   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:54.271503   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:54.385481   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:54.454436   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:54.771262   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:54.883632   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:54.954779   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:55.271557   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:55.384456   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:55.454343   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:55.771199   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:55.884289   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:55.955484   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:56.275413   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:56.384305   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:56.454999   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:56.770621   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:56.884516   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:56.955294   20973 kapi.go:107] duration metric: took 57.506059197s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 00:06:57.270270   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:57.383930   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:57.771289   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:57.884183   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:58.271600   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:58.383046   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:58.771509   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:58.884944   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:59.270654   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:59.384427   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:59.770756   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:59.883691   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:00.270898   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:00.383570   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:00.771218   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:00.885323   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:01.270764   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:01.383489   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:01.772498   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:01.884966   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:02.273616   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:02.384057   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:02.773064   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:02.883155   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:03.273005   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:03.383714   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:03.771438   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:03.888010   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:04.274349   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:04.390944   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:04.777206   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:04.887010   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:05.272171   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:05.387357   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:05.770905   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:05.883443   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:06.270687   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:06.383559   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:06.820714   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:06.884690   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:07.271346   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:07.385509   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:07.771206   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:07.884685   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:08.270235   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:08.383693   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:08.790118   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:08.884478   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:09.306631   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:09.383956   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:09.772425   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:09.883498   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:10.282960   20973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:10.392164   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:10.771614   20973 kapi.go:107] duration metric: took 1m13.005926576s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 00:07:10.885948   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:11.384895   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:11.883732   20973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:07:12.383820   20973 kapi.go:107] duration metric: took 1m11.503762856s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 00:07:12.385435   20973 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-860537 cluster.
	I0717 00:07:12.386678   20973 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 00:07:12.387811   20973 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 00:07:12.389336   20973 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, metrics-server, nvidia-device-plugin, helm-tiller, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0717 00:07:12.390488   20973 addons.go:510] duration metric: took 1m23.702134744s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns metrics-server nvidia-device-plugin helm-tiller inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0717 00:07:12.390524   20973 start.go:246] waiting for cluster config update ...
	I0717 00:07:12.390540   20973 start.go:255] writing updated cluster config ...
	I0717 00:07:12.390791   20973 ssh_runner.go:195] Run: rm -f paused
	I0717 00:07:12.439906   20973 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:07:12.441547   20973 out.go:177] * Done! kubectl is now configured to use "addons-860537" cluster and "default" namespace by default
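The kapi.go:96 lines above are minikube polling each addon's pods by label selector until they leave the Pending phase, and the out.go messages then describe how to opt a pod out of gcp-auth credential mounting. Below is a hand-run sketch of the same ideas using kubectl and the minikube binary from this report; the namespaces, the 5m timeout, and the label value "true" are assumptions for illustration, while the selectors, the gcp-auth-skip-secret key, and the --refresh flag come from the log itself:

    # Roughly what the wait loop checks: addon pods matching a selector becoming ready
    kubectl --context addons-860537 -n ingress-nginx wait pod \
      --selector=app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=5m
    kubectl --context addons-860537 -n gcp-auth wait pod \
      --selector=kubernetes.io/minikube-addons=gcp-auth \
      --for=condition=Ready --timeout=5m

    # Opting a single pod out of credential mounting (label value "true" is assumed)
    kubectl --context addons-860537 run skip-demo --image=busybox --restart=Never \
      --labels=gcp-auth-skip-secret=true -- sleep 3600

    # Re-mounting credentials into pods that already existed, per the hint above
    out/minikube-linux-amd64 -p addons-860537 addons enable gcp-auth --refresh

Note that the log itself tracks pod phase (Pending/Running) rather than the Ready condition, so the wait commands are only an approximation of minikube's own check.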
	
	
	==> CRI-O <==
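The CRI-O log that follows is crio's debug trace of CRI RPCs issued by the kubelet (Version, ImageFsInfo, and ListContainers with no filters). For reference, the same endpoints can be queried by hand with crictl on the node; this is a sketch and assumes crictl is already pointed at the CRI-O socket:

    # e.g. from a shell opened with: out/minikube-linux-amd64 -p addons-860537 ssh
    sudo crictl version        # RuntimeService/Version
    sudo crictl imagefsinfo    # ImageService/ImageFsInfo
    sudo crictl ps -a          # RuntimeService/ListContainers, unfiltered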
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.327279508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721175192327253695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=370c89f5-e947-44fb-87f9-05489ed3fd89 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.327920087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39630749-b2de-4c3f-b634-0b9054dc584f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.327989897Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39630749-b2de-4c3f-b634-0b9054dc584f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.328285883Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72cd5f094e299b8179e884eef96004efa244774d4294b711ff4bbc3af41a0c46,PodSandboxId:e74f7f74515b8a9ddbc3b6d06cd28a0dc55372b7b0a231fcb3a3787473b76523,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721175012544117643,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-4hl58,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de2a9e7d-611b-4332-ba3c-d631603eed79,},Annotations:map[string]string{io.kubernetes.container.hash: f230b00f,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4aeb9a53a78efb30ea9c9e8d2102c15f47adb5bb24e3e130a88b7b403dbae31,PodSandboxId:41d2b6c3006a33fc552d9fd5e4e865f8d467c62367c3b4e1ce2c7673ead0403b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721174871850489359,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19a96ab4-cd55-4419-b5a7-8b9e8823879f,},Annotations:map[string]string{io.kubernet
es.container.hash: 78f7281c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:017f08795f68a8a748ca3978528da32e92543780a43c0a7bb490b2061d5dbed5,PodSandboxId:424a5c95da73b09ff2a452bfa53e0840802ebfc4ee27e19fddba405955f393f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721174848676635811,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-rw54z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 22484240-e20c-4ef5-a0da-50269ed47664,},Annotations:map[string]string{io.kubernetes.container.hash: 49b847eb,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e72ac612a045e5b4a380c6a285d77d09037d47c93c00e629abed0a31e9e8b7e,PodSandboxId:de67c66d118049e89e78e1921b5ce1cb66346dd480b01c8b20204723dbec2db6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721174831597632817,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-q5sd8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 36d0948a-8a19-4f23-b53e-3a648152fffb,},Annotations:map[string]string{io.kubernetes.container.hash: adcd5a98,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354b0940f3ebdf913ebdb3f69e24dab26c45d80e7b300db4f838bbb2a6a84e0,PodSandboxId:749c2c994691e6ce06667302acac77001b8f7655df5a3480ac6078efbd0fc599,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:
CONTAINER_RUNNING,CreatedAt:1721174800132888251,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-dz45b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ba263468-5fa1-4873-a77c-8a7e8c823342,},Annotations:map[string]string{io.kubernetes.container.hash: a97e0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b924521f0efc85318dd15de892045d5cdfeed64a916871904e3aa5a54dd082ff,PodSandboxId:825f2231b6de9c3036ee45c8c9d2229d8a35eae7c247c28138cd8bba2c7b9592,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e468
91773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721174784104359949,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-h6wwn,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4e01edaf-fd5a-4055-adc7-3814ccc74e83,},Annotations:map[string]string{io.kubernetes.container.hash: 3582c1c3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a0958232c4d0cbf85d9c18df41696a349e1a6a0f6f5defb4f1dc6a246a7e98,PodSandboxId:347ecc25291eb328e696b3b1b011705fda8af3ca4c8febe3af8c56f7475081ae,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e5
88f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721174761108799539,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-zq4m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 332284a0-4c05-4737-8669-c71012684bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d41a249,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614282a521d58d24e3137e97082a860d78febe30c3660bd7c9ee1780d71ca762,PodSandboxId:40a89e8b774b2eaf3dcdb95c1e983163d964f50668976d4888e995015a9e298c,Metadata:&ContainerMetadata{Name:storage-provi
sioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721174757446145150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71073df2-0967-430a-94e9-5a3641c16eed,},Annotations:map[string]string{io.kubernetes.container.hash: babda854,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9267303f1604897f5cf761e45ef2ed1f785ce69e518b730078260f842874cff,PodSandboxId:59f13eb22ca98f3e40c185c2cffb4fdee151409a08a25f26ea8c7256b8cc7f95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image
:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721174753977793670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x569p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e4c6914-ede3-4b0b-b696-83768c15f61f,},Annotations:map[string]string{io.kubernetes.container.hash: e05fb4eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90a0b8
d48757698d0e608dfe79b2fe94258e6c3b05b82f8c4085c8a9b7c185b6,PodSandboxId:0775f25ceaca034873b1f2ad4ad7d9c5182c41cc593a5d3f5a13cd4f51e10923,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721174750775514432,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6kwx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bc49e4-c111-4184-83f6-14800ece6dc1,},Annotations:map[string]string{io.kubernetes.container.hash: c3480f7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6262ffd56c7a125e22a281b77eeaa64a1290b
d2861165394c264dba8c5696f,PodSandboxId:8ff22cb5467bd2c46084782c2ba9d24b711e1617234cda0ae434856e0366c202,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721174729412037968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d60fd94d932d2ba8608f510ed5f190a,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b03f56d8b1d6fc271362e7a60c4eedfb507e3c3d4fe5f1ce8b2687a2
fc58e2f,PodSandboxId:e1ed3dab6c8e597298b8bd982950ce5eba8403cf7043c5825f919f68cf17712c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721174729407897443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9cd091645e319574d7f043d4df0944d,},Annotations:map[string]string{io.kubernetes.container.hash: 8aa49d05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70759f229bbf27cec5cd2c67572fdb817b6cb5f562dd0fa5b3befe52e07b6cb9,PodSandboxId:f47fd7ea3b4550df5e19d53a60
c4abadc31d8ea21bd7cd329795fde4d861656f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721174729349060746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f96988fa3ac783d3dee6b95d6d3bfb5,},Annotations:map[string]string{io.kubernetes.container.hash: f347147a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a177722461d94437949f90ba19d018220705caf3cbff6f498441d67ca21aeda8,PodSandboxId:3f3ff8d1f348a8df3b21eafbb8c9959556d0bc13008
539df19fdc49ba79dbb28,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721174729240262165,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94cfcc47ed48397882029d326991bf1f,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39630749-b2de-4c3f-b634-0b9054dc584f name=/runtime.v1.RuntimeService/ListCon
tainers
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.368225555Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1dea5f6c-8014-4e39-b52d-07f2ddba1e0c name=/runtime.v1.RuntimeService/Version
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.368313195Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1dea5f6c-8014-4e39-b52d-07f2ddba1e0c name=/runtime.v1.RuntimeService/Version
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.369892267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=796f0fec-bfff-4e2a-b4a5-c67b3cabd24c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.371508006Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721175192371475857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=796f0fec-bfff-4e2a-b4a5-c67b3cabd24c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.372202739Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cbb0f00-6070-4626-a587-95e4eedd164f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.372278890Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cbb0f00-6070-4626-a587-95e4eedd164f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.372571018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72cd5f094e299b8179e884eef96004efa244774d4294b711ff4bbc3af41a0c46,PodSandboxId:e74f7f74515b8a9ddbc3b6d06cd28a0dc55372b7b0a231fcb3a3787473b76523,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721175012544117643,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-4hl58,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de2a9e7d-611b-4332-ba3c-d631603eed79,},Annotations:map[string]string{io.kubernetes.container.hash: f230b00f,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4aeb9a53a78efb30ea9c9e8d2102c15f47adb5bb24e3e130a88b7b403dbae31,PodSandboxId:41d2b6c3006a33fc552d9fd5e4e865f8d467c62367c3b4e1ce2c7673ead0403b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721174871850489359,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19a96ab4-cd55-4419-b5a7-8b9e8823879f,},Annotations:map[string]string{io.kubernet
es.container.hash: 78f7281c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:017f08795f68a8a748ca3978528da32e92543780a43c0a7bb490b2061d5dbed5,PodSandboxId:424a5c95da73b09ff2a452bfa53e0840802ebfc4ee27e19fddba405955f393f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721174848676635811,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-rw54z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 22484240-e20c-4ef5-a0da-50269ed47664,},Annotations:map[string]string{io.kubernetes.container.hash: 49b847eb,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e72ac612a045e5b4a380c6a285d77d09037d47c93c00e629abed0a31e9e8b7e,PodSandboxId:de67c66d118049e89e78e1921b5ce1cb66346dd480b01c8b20204723dbec2db6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721174831597632817,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-q5sd8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 36d0948a-8a19-4f23-b53e-3a648152fffb,},Annotations:map[string]string{io.kubernetes.container.hash: adcd5a98,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354b0940f3ebdf913ebdb3f69e24dab26c45d80e7b300db4f838bbb2a6a84e0,PodSandboxId:749c2c994691e6ce06667302acac77001b8f7655df5a3480ac6078efbd0fc599,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:
CONTAINER_RUNNING,CreatedAt:1721174800132888251,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-dz45b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ba263468-5fa1-4873-a77c-8a7e8c823342,},Annotations:map[string]string{io.kubernetes.container.hash: a97e0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b924521f0efc85318dd15de892045d5cdfeed64a916871904e3aa5a54dd082ff,PodSandboxId:825f2231b6de9c3036ee45c8c9d2229d8a35eae7c247c28138cd8bba2c7b9592,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e468
91773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721174784104359949,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-h6wwn,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4e01edaf-fd5a-4055-adc7-3814ccc74e83,},Annotations:map[string]string{io.kubernetes.container.hash: 3582c1c3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a0958232c4d0cbf85d9c18df41696a349e1a6a0f6f5defb4f1dc6a246a7e98,PodSandboxId:347ecc25291eb328e696b3b1b011705fda8af3ca4c8febe3af8c56f7475081ae,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e5
88f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721174761108799539,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-zq4m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 332284a0-4c05-4737-8669-c71012684bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d41a249,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614282a521d58d24e3137e97082a860d78febe30c3660bd7c9ee1780d71ca762,PodSandboxId:40a89e8b774b2eaf3dcdb95c1e983163d964f50668976d4888e995015a9e298c,Metadata:&ContainerMetadata{Name:storage-provi
sioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721174757446145150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71073df2-0967-430a-94e9-5a3641c16eed,},Annotations:map[string]string{io.kubernetes.container.hash: babda854,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9267303f1604897f5cf761e45ef2ed1f785ce69e518b730078260f842874cff,PodSandboxId:59f13eb22ca98f3e40c185c2cffb4fdee151409a08a25f26ea8c7256b8cc7f95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image
:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721174753977793670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x569p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e4c6914-ede3-4b0b-b696-83768c15f61f,},Annotations:map[string]string{io.kubernetes.container.hash: e05fb4eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90a0b8
d48757698d0e608dfe79b2fe94258e6c3b05b82f8c4085c8a9b7c185b6,PodSandboxId:0775f25ceaca034873b1f2ad4ad7d9c5182c41cc593a5d3f5a13cd4f51e10923,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721174750775514432,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6kwx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bc49e4-c111-4184-83f6-14800ece6dc1,},Annotations:map[string]string{io.kubernetes.container.hash: c3480f7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6262ffd56c7a125e22a281b77eeaa64a1290b
d2861165394c264dba8c5696f,PodSandboxId:8ff22cb5467bd2c46084782c2ba9d24b711e1617234cda0ae434856e0366c202,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721174729412037968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d60fd94d932d2ba8608f510ed5f190a,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b03f56d8b1d6fc271362e7a60c4eedfb507e3c3d4fe5f1ce8b2687a2
fc58e2f,PodSandboxId:e1ed3dab6c8e597298b8bd982950ce5eba8403cf7043c5825f919f68cf17712c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721174729407897443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9cd091645e319574d7f043d4df0944d,},Annotations:map[string]string{io.kubernetes.container.hash: 8aa49d05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70759f229bbf27cec5cd2c67572fdb817b6cb5f562dd0fa5b3befe52e07b6cb9,PodSandboxId:f47fd7ea3b4550df5e19d53a60
c4abadc31d8ea21bd7cd329795fde4d861656f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721174729349060746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f96988fa3ac783d3dee6b95d6d3bfb5,},Annotations:map[string]string{io.kubernetes.container.hash: f347147a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a177722461d94437949f90ba19d018220705caf3cbff6f498441d67ca21aeda8,PodSandboxId:3f3ff8d1f348a8df3b21eafbb8c9959556d0bc13008
539df19fdc49ba79dbb28,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721174729240262165,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94cfcc47ed48397882029d326991bf1f,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8cbb0f00-6070-4626-a587-95e4eedd164f name=/runtime.v1.RuntimeService/ListCon
tainers
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.411856020Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73b3e32a-901e-49d4-9720-6e7bf4e609e3 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.411944912Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73b3e32a-901e-49d4-9720-6e7bf4e609e3 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.413374245Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=162a5acc-5558-4e55-ab72-177a3f959153 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.414663733Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721175192414634992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=162a5acc-5558-4e55-ab72-177a3f959153 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.415290977Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08dc5ee7-0939-4839-af77-d1585ce140cc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.415350688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08dc5ee7-0939-4839-af77-d1585ce140cc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.415907570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72cd5f094e299b8179e884eef96004efa244774d4294b711ff4bbc3af41a0c46,PodSandboxId:e74f7f74515b8a9ddbc3b6d06cd28a0dc55372b7b0a231fcb3a3787473b76523,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721175012544117643,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-4hl58,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de2a9e7d-611b-4332-ba3c-d631603eed79,},Annotations:map[string]string{io.kubernetes.container.hash: f230b00f,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4aeb9a53a78efb30ea9c9e8d2102c15f47adb5bb24e3e130a88b7b403dbae31,PodSandboxId:41d2b6c3006a33fc552d9fd5e4e865f8d467c62367c3b4e1ce2c7673ead0403b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721174871850489359,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19a96ab4-cd55-4419-b5a7-8b9e8823879f,},Annotations:map[string]string{io.kubernet
es.container.hash: 78f7281c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:017f08795f68a8a748ca3978528da32e92543780a43c0a7bb490b2061d5dbed5,PodSandboxId:424a5c95da73b09ff2a452bfa53e0840802ebfc4ee27e19fddba405955f393f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721174848676635811,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-rw54z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 22484240-e20c-4ef5-a0da-50269ed47664,},Annotations:map[string]string{io.kubernetes.container.hash: 49b847eb,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e72ac612a045e5b4a380c6a285d77d09037d47c93c00e629abed0a31e9e8b7e,PodSandboxId:de67c66d118049e89e78e1921b5ce1cb66346dd480b01c8b20204723dbec2db6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721174831597632817,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-q5sd8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 36d0948a-8a19-4f23-b53e-3a648152fffb,},Annotations:map[string]string{io.kubernetes.container.hash: adcd5a98,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6354b0940f3ebdf913ebdb3f69e24dab26c45d80e7b300db4f838bbb2a6a84e0,PodSandboxId:749c2c994691e6ce06667302acac77001b8f7655df5a3480ac6078efbd0fc599,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:
CONTAINER_RUNNING,CreatedAt:1721174800132888251,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-dz45b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ba263468-5fa1-4873-a77c-8a7e8c823342,},Annotations:map[string]string{io.kubernetes.container.hash: a97e0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b924521f0efc85318dd15de892045d5cdfeed64a916871904e3aa5a54dd082ff,PodSandboxId:825f2231b6de9c3036ee45c8c9d2229d8a35eae7c247c28138cd8bba2c7b9592,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e468
91773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721174784104359949,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-h6wwn,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 4e01edaf-fd5a-4055-adc7-3814ccc74e83,},Annotations:map[string]string{io.kubernetes.container.hash: 3582c1c3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a0958232c4d0cbf85d9c18df41696a349e1a6a0f6f5defb4f1dc6a246a7e98,PodSandboxId:347ecc25291eb328e696b3b1b011705fda8af3ca4c8febe3af8c56f7475081ae,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e5
88f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721174761108799539,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-zq4m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 332284a0-4c05-4737-8669-c71012684bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d41a249,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614282a521d58d24e3137e97082a860d78febe30c3660bd7c9ee1780d71ca762,PodSandboxId:40a89e8b774b2eaf3dcdb95c1e983163d964f50668976d4888e995015a9e298c,Metadata:&ContainerMetadata{Name:storage-provi
sioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721174757446145150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71073df2-0967-430a-94e9-5a3641c16eed,},Annotations:map[string]string{io.kubernetes.container.hash: babda854,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9267303f1604897f5cf761e45ef2ed1f785ce69e518b730078260f842874cff,PodSandboxId:59f13eb22ca98f3e40c185c2cffb4fdee151409a08a25f26ea8c7256b8cc7f95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image
:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721174753977793670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x569p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e4c6914-ede3-4b0b-b696-83768c15f61f,},Annotations:map[string]string{io.kubernetes.container.hash: e05fb4eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90a0b8
d48757698d0e608dfe79b2fe94258e6c3b05b82f8c4085c8a9b7c185b6,PodSandboxId:0775f25ceaca034873b1f2ad4ad7d9c5182c41cc593a5d3f5a13cd4f51e10923,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721174750775514432,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6kwx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bc49e4-c111-4184-83f6-14800ece6dc1,},Annotations:map[string]string{io.kubernetes.container.hash: c3480f7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6262ffd56c7a125e22a281b77eeaa64a1290b
d2861165394c264dba8c5696f,PodSandboxId:8ff22cb5467bd2c46084782c2ba9d24b711e1617234cda0ae434856e0366c202,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721174729412037968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d60fd94d932d2ba8608f510ed5f190a,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b03f56d8b1d6fc271362e7a60c4eedfb507e3c3d4fe5f1ce8b2687a2
fc58e2f,PodSandboxId:e1ed3dab6c8e597298b8bd982950ce5eba8403cf7043c5825f919f68cf17712c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721174729407897443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9cd091645e319574d7f043d4df0944d,},Annotations:map[string]string{io.kubernetes.container.hash: 8aa49d05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70759f229bbf27cec5cd2c67572fdb817b6cb5f562dd0fa5b3befe52e07b6cb9,PodSandboxId:f47fd7ea3b4550df5e19d53a60
c4abadc31d8ea21bd7cd329795fde4d861656f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721174729349060746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f96988fa3ac783d3dee6b95d6d3bfb5,},Annotations:map[string]string{io.kubernetes.container.hash: f347147a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a177722461d94437949f90ba19d018220705caf3cbff6f498441d67ca21aeda8,PodSandboxId:3f3ff8d1f348a8df3b21eafbb8c9959556d0bc13008
539df19fdc49ba79dbb28,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721174729240262165,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-860537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94cfcc47ed48397882029d326991bf1f,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08dc5ee7-0939-4839-af77-d1585ce140cc name=/runtime.v1.RuntimeService/ListCon
tainers
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.436826942Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/f6a0958232c4d0cbf85d9c18df41696a349e1a6a0f6f5defb4f1dc6a246a7e98.28FTQ2\"" file="server/server.go:805"
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.436934240Z" level=debug msg="Container or sandbox exited: f6a0958232c4d0cbf85d9c18df41696a349e1a6a0f6f5defb4f1dc6a246a7e98.28FTQ2" file="server/server.go:810"
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.436976089Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/f6a0958232c4d0cbf85d9c18df41696a349e1a6a0f6f5defb4f1dc6a246a7e98\"" file="server/server.go:805"
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.436994920Z" level=debug msg="Container or sandbox exited: f6a0958232c4d0cbf85d9c18df41696a349e1a6a0f6f5defb4f1dc6a246a7e98" file="server/server.go:810"
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.437015357Z" level=debug msg="container exited and found: f6a0958232c4d0cbf85d9c18df41696a349e1a6a0f6f5defb4f1dc6a246a7e98" file="server/server.go:825"
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.437054540Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/f6a0958232c4d0cbf85d9c18df41696a349e1a6a0f6f5defb4f1dc6a246a7e98.28FTQ2\"" file="server/server.go:805"
	Jul 17 00:13:12 addons-860537 crio[687]: time="2024-07-17 00:13:12.437108361Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/f6a0958232c4d0cbf85d9c18df41696a349e1a6a0f6f5defb4f1dc6a246a7e98.28FTQ2\"" file="server/server.go:805"
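
The crio entries above are debug-level CRI traffic (Version, ImageFsInfo and ListContainers polling) interleaved with a few container exit-file events. When triaging a dump like this, a quick tally of which CRI methods dominate makes it easier to separate polling noise from the lines that matter. A minimal sketch in Python (the crio.log path is illustrative only, not part of this report):

import re
from collections import Counter

# Each crio debug line tags the gRPC method it served, e.g.
#   name=/runtime.v1.RuntimeService/ListContainers
METHOD_RE = re.compile(r"name=(/runtime\.v1\.\w+Service/\w+)")

def tally_cri_methods(path):
    """Count how often each CRI method appears in a crio debug log excerpt."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            match = METHOD_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    # Example: save the excerpt above to crio.log and run this script.
    for method, n in tally_cri_methods("crio.log").most_common():
        print(f"{n:5d}  {method}")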
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	72cd5f094e299       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   e74f7f74515b8       hello-world-app-6778b5fc9f-4hl58
	e4aeb9a53a78e       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         5 minutes ago       Running             nginx                     0                   41d2b6c3006a3       nginx
	017f08795f68a       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   424a5c95da73b       headlamp-7867546754-rw54z
	0e72ac612a045       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   de67c66d11804       gcp-auth-5db96cd9b4-q5sd8
	6354b0940f3eb       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        6 minutes ago       Running             local-path-provisioner    0                   749c2c994691e       local-path-provisioner-8d985888d-dz45b
	b924521f0efc8       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                         6 minutes ago       Running             yakd                      0                   825f2231b6de9       yakd-dashboard-799879c74f-h6wwn
	f6a0958232c4d       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   347ecc25291eb       metrics-server-c59844bb4-zq4m7
	614282a521d58       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   40a89e8b774b2       storage-provisioner
	e9267303f1604       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   59f13eb22ca98       coredns-7db6d8ff4d-x569p
	90a0b8d487576       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                        7 minutes ago       Running             kube-proxy                0                   0775f25ceaca0       kube-proxy-6kwx2
	9e6262ffd56c7       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                        7 minutes ago       Running             kube-scheduler            0                   8ff22cb5467bd       kube-scheduler-addons-860537
	5b03f56d8b1d6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   e1ed3dab6c8e5       etcd-addons-860537
	70759f229bbf2       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                        7 minutes ago       Running             kube-apiserver            0                   f47fd7ea3b455       kube-apiserver-addons-860537
	a177722461d94       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                        7 minutes ago       Running             kube-controller-manager   0                   3f3ff8d1f348a       kube-controller-manager-addons-860537
	
	
	==> coredns [e9267303f1604897f5cf761e45ef2ed1f785ce69e518b730078260f842874cff] <==
	[INFO] 10.244.0.22:47384 - 59305 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0010512s
	[INFO] 10.244.0.22:58003 - 7769 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123783s
	[INFO] 10.244.0.22:51524 - 56012 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130288s
	[INFO] 10.244.0.22:33725 - 31098 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137565s
	[INFO] 10.244.0.22:53943 - 24036 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063995s
	[INFO] 10.244.0.22:53391 - 39142 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000971605s
	[INFO] 10.244.0.22:43098 - 59047 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001273392s
	[INFO] 10.244.0.26:37982 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000274064s
	[INFO] 10.244.0.26:44792 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127581s
	[INFO] 10.244.0.8:55787 - 4706 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000396096s
	[INFO] 10.244.0.8:55787 - 33126 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000580831s
	[INFO] 10.244.0.8:55912 - 23124 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000096631s
	[INFO] 10.244.0.8:55912 - 39511 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00010171s
	[INFO] 10.244.0.8:54214 - 18215 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000071519s
	[INFO] 10.244.0.8:54214 - 28961 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114459s
	[INFO] 10.244.0.8:56515 - 23268 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012098s
	[INFO] 10.244.0.8:56515 - 4582 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000091374s
	[INFO] 10.244.0.8:46359 - 4922 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089402s
	[INFO] 10.244.0.8:46359 - 3383 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00008213s
	[INFO] 10.244.0.8:44763 - 43490 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051744s
	[INFO] 10.244.0.8:44763 - 29676 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114211s
	[INFO] 10.244.0.8:48501 - 21602 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050389s
	[INFO] 10.244.0.8:48501 - 25184 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00005842s
	[INFO] 10.244.0.8:58290 - 18114 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000069139s
	[INFO] 10.244.0.8:58290 - 15557 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000043566s
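
The NXDOMAIN/NOERROR pairs above are ordinary in-cluster search-path expansion: before (or after) the bare name, the resolver tries it with each search suffix visible in the queries. A small sketch of that candidate ordering (the suffix list is read off the queries above; an actual pod's resolv.conf may differ):

# Search suffixes as they appear in the coredns queries above; a pod's
# /etc/resolv.conf may list different or additional domains.
SEARCH = ["kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"]
NDOTS = 5  # common kubelet default; assumed here, not shown in this log

def candidates(name):
    """Order in which a name is looked up under the usual ndots/search behaviour."""
    if name.endswith("."):               # already fully qualified, no expansion
        return [name]
    expanded = [f"{name}.{suffix}" for suffix in SEARCH]
    bare = [name]
    # With fewer dots than ndots, the search suffixes are tried first,
    # which is why the NXDOMAIN answers precede the final NOERROR above.
    return expanded + bare if name.count(".") < NDOTS else bare + expanded

print(candidates("registry.kube-system.svc.cluster.local"))
print(candidates("storage.googleapis.com"))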
	
	
	==> describe nodes <==
	Name:               addons-860537
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-860537
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=addons-860537
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_05_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-860537
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:05:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-860537
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:13:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:10:41 +0000   Wed, 17 Jul 2024 00:05:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:10:41 +0000   Wed, 17 Jul 2024 00:05:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:10:41 +0000   Wed, 17 Jul 2024 00:05:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:10:41 +0000   Wed, 17 Jul 2024 00:05:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    addons-860537
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c671ebb5ea348aeab41add3caf066ee
	  System UUID:                5c671ebb-5ea3-48ae-ab41-add3caf066ee
	  Boot ID:                    cf2dd3c3-1cd2-4106-8254-8d19829cd428
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-4hl58          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  gcp-auth                    gcp-auth-5db96cd9b4-q5sd8                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  headlamp                    headlamp-7867546754-rw54z                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 coredns-7db6d8ff4d-x569p                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m24s
	  kube-system                 etcd-addons-860537                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m38s
	  kube-system                 kube-apiserver-addons-860537              250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-controller-manager-addons-860537     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-proxy-6kwx2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-scheduler-addons-860537              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 metrics-server-c59844bb4-zq4m7            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m20s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  local-path-storage          local-path-provisioner-8d985888d-dz45b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  yakd-dashboard              yakd-dashboard-799879c74f-h6wwn           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m21s  kube-proxy       
	  Normal  Starting                 7m38s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m38s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m38s  kubelet          Node addons-860537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s  kubelet          Node addons-860537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s  kubelet          Node addons-860537 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m37s  kubelet          Node addons-860537 status is now: NodeReady
	  Normal  RegisteredNode           7m24s  node-controller  Node addons-860537 event: Registered Node addons-860537 in Controller
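
As a cross-check on the Allocated resources summary above, the totals follow directly from the per-pod requests in the Non-terminated Pods table and the node's allocatable capacity (2 CPUs, 3912780Ki memory). A short worked sum, with the values copied from the tables above:

# CPU requests (millicores): coredns, etcd, kube-apiserver,
# kube-controller-manager, kube-scheduler, metrics-server.
cpu_requests_m = 100 + 100 + 250 + 200 + 100 + 100

# Memory requests (Mi): coredns, etcd, metrics-server, yakd;
# coredns and yakd are the only pods with memory limits.
mem_requests_mi = 70 + 100 + 200 + 128
mem_limits_mi = 170 + 256

node_cpu_m = 2 * 1000        # 2 allocatable CPUs
node_mem_ki = 3912780        # allocatable memory from the node description

print(f"cpu     {cpu_requests_m}m  ({cpu_requests_m * 100 // node_cpu_m}%)")
print(f"memory  {mem_requests_mi}Mi ({mem_requests_mi * 1024 * 100 // node_mem_ki}%) "
      f"/ limits {mem_limits_mi}Mi ({mem_limits_mi * 1024 * 100 // node_mem_ki}%)")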
	
	
	==> dmesg <==
	[  +0.096676] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.431754] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	[  +0.116148] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.170952] kauditd_printk_skb: 105 callbacks suppressed
	[  +5.045436] kauditd_printk_skb: 126 callbacks suppressed
	[Jul17 00:06] kauditd_printk_skb: 101 callbacks suppressed
	[ +24.813034] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.347923] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.895175] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.966756] kauditd_printk_skb: 59 callbacks suppressed
	[Jul17 00:07] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.353377] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.166629] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.738567] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.050701] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.623863] kauditd_printk_skb: 33 callbacks suppressed
	[  +9.017464] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.854283] kauditd_printk_skb: 35 callbacks suppressed
	[ +14.652610] kauditd_printk_skb: 13 callbacks suppressed
	[Jul17 00:08] kauditd_printk_skb: 2 callbacks suppressed
	[ +23.727612] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.261115] kauditd_printk_skb: 33 callbacks suppressed
	[ +11.360842] kauditd_printk_skb: 6 callbacks suppressed
	[Jul17 00:10] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.147405] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [5b03f56d8b1d6fc271362e7a60c4eedfb507e3c3d4fe5f1ce8b2687a2fc58e2f] <==
	{"level":"info","ts":"2024-07-17T00:06:46.208406Z","caller":"traceutil/trace.go:171","msg":"trace[1516546414] linearizableReadLoop","detail":"{readStateIndex:1069; appliedIndex:1068; }","duration":"452.061456ms","start":"2024-07-17T00:06:45.756327Z","end":"2024-07-17T00:06:46.208389Z","steps":["trace[1516546414] 'read index received'  (duration: 451.906328ms)","trace[1516546414] 'applied index is now lower than readState.Index'  (duration: 154.631µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:06:46.209279Z","caller":"traceutil/trace.go:171","msg":"trace[1498206063] transaction","detail":"{read_only:false; response_revision:1039; number_of_response:1; }","duration":"496.609715ms","start":"2024-07-17T00:06:45.712649Z","end":"2024-07-17T00:06:46.209259Z","steps":["trace[1498206063] 'process raft request'  (duration: 495.629827ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:06:46.209459Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:06:45.712632Z","time spent":"496.721639ms","remote":"127.0.0.1:34550","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-860537\" mod_revision:945 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-860537\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-860537\" > >"}
	{"level":"warn","ts":"2024-07-17T00:06:46.21042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"454.08225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14358"}
	{"level":"info","ts":"2024-07-17T00:06:46.210482Z","caller":"traceutil/trace.go:171","msg":"trace[1761662920] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1039; }","duration":"454.162715ms","start":"2024-07-17T00:06:45.756304Z","end":"2024-07-17T00:06:46.210467Z","steps":["trace[1761662920] 'agreement among raft nodes before linearized reading'  (duration: 454.022518ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:06:46.210508Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:06:45.75629Z","time spent":"454.209743ms","remote":"127.0.0.1:34462","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14382,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-07-17T00:06:46.211335Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"341.35064ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11309"}
	{"level":"info","ts":"2024-07-17T00:06:46.211391Z","caller":"traceutil/trace.go:171","msg":"trace[878590856] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1039; }","duration":"341.422863ms","start":"2024-07-17T00:06:45.869955Z","end":"2024-07-17T00:06:46.211378Z","steps":["trace[878590856] 'agreement among raft nodes before linearized reading'  (duration: 341.103114ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:06:46.211413Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:06:45.869943Z","time spent":"341.464242ms","remote":"127.0.0.1:34462","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11333,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-17T00:06:46.213947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.234246ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-jqn6l\" ","response":"range_response_count:1 size:4239"}
	{"level":"info","ts":"2024-07-17T00:06:46.214121Z","caller":"traceutil/trace.go:171","msg":"trace[1836413192] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-jqn6l; range_end:; response_count:1; response_revision:1039; }","duration":"147.722834ms","start":"2024-07-17T00:06:46.066387Z","end":"2024-07-17T00:06:46.21411Z","steps":["trace[1836413192] 'agreement among raft nodes before linearized reading'  (duration: 145.819051ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:06:46.214798Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.864432ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85504"}
	{"level":"info","ts":"2024-07-17T00:06:46.214907Z","caller":"traceutil/trace.go:171","msg":"trace[1934117135] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1039; }","duration":"275.993426ms","start":"2024-07-17T00:06:45.9389Z","end":"2024-07-17T00:06:46.214894Z","steps":["trace[1934117135] 'agreement among raft nodes before linearized reading'  (duration: 273.621564ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:06.799109Z","caller":"traceutil/trace.go:171","msg":"trace[208696071] linearizableReadLoop","detail":"{readStateIndex:1163; appliedIndex:1162; }","duration":"219.199233ms","start":"2024-07-17T00:07:06.579817Z","end":"2024-07-17T00:07:06.799016Z","steps":["trace[208696071] 'read index received'  (duration: 218.585581ms)","trace[208696071] 'applied index is now lower than readState.Index'  (duration: 612.965µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:07:06.799941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.089401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-07-17T00:07:06.80006Z","caller":"traceutil/trace.go:171","msg":"trace[1668775163] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1128; }","duration":"220.271884ms","start":"2024-07-17T00:07:06.579773Z","end":"2024-07-17T00:07:06.800045Z","steps":["trace[1668775163] 'agreement among raft nodes before linearized reading'  (duration: 220.052825ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:06.800157Z","caller":"traceutil/trace.go:171","msg":"trace[1010560812] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"226.394688ms","start":"2024-07-17T00:07:06.573753Z","end":"2024-07-17T00:07:06.800148Z","steps":["trace[1010560812] 'process raft request'  (duration: 224.705939ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:07:09.288112Z","caller":"traceutil/trace.go:171","msg":"trace[1535830678] linearizableReadLoop","detail":"{readStateIndex:1168; appliedIndex:1167; }","duration":"238.997396ms","start":"2024-07-17T00:07:09.049096Z","end":"2024-07-17T00:07:09.288093Z","steps":["trace[1535830678] 'read index received'  (duration: 234.521743ms)","trace[1535830678] 'applied index is now lower than readState.Index'  (duration: 4.474497ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:07:09.28835Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.236552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-17T00:07:09.288403Z","caller":"traceutil/trace.go:171","msg":"trace[1869272989] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1132; }","duration":"239.319484ms","start":"2024-07-17T00:07:09.049071Z","end":"2024-07-17T00:07:09.28839Z","steps":["trace[1869272989] 'agreement among raft nodes before linearized reading'  (duration: 239.176445ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:07:26.7323Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.979813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3966"}
	{"level":"info","ts":"2024-07-17T00:07:26.732381Z","caller":"traceutil/trace.go:171","msg":"trace[1432672864] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1312; }","duration":"103.099702ms","start":"2024-07-17T00:07:26.629262Z","end":"2024-07-17T00:07:26.732362Z","steps":["trace[1432672864] 'range keys from in-memory index tree'  (duration: 102.842956ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:08:33.678834Z","caller":"traceutil/trace.go:171","msg":"trace[248690361] linearizableReadLoop","detail":"{readStateIndex:1728; appliedIndex:1727; }","duration":"101.19658ms","start":"2024-07-17T00:08:33.577578Z","end":"2024-07-17T00:08:33.678774Z","steps":["trace[248690361] 'read index received'  (duration: 100.972451ms)","trace[248690361] 'applied index is now lower than readState.Index'  (duration: 223.075µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:08:33.679163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.515953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/external-snapshotter-leaderelection\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:08:33.67928Z","caller":"traceutil/trace.go:171","msg":"trace[1759932876] range","detail":"{range_begin:/registry/roles/kube-system/external-snapshotter-leaderelection; range_end:; response_count:0; response_revision:1665; }","duration":"101.704275ms","start":"2024-07-17T00:08:33.577552Z","end":"2024-07-17T00:08:33.679256Z","steps":["trace[1759932876] 'agreement among raft nodes before linearized reading'  (duration: 101.507072ms)"],"step_count":1}
	
	
	==> gcp-auth [0e72ac612a045e5b4a380c6a285d77d09037d47c93c00e629abed0a31e9e8b7e] <==
	2024/07/17 00:07:11 GCP Auth Webhook started!
	2024/07/17 00:07:12 Ready to marshal response ...
	2024/07/17 00:07:12 Ready to write response ...
	2024/07/17 00:07:12 Ready to marshal response ...
	2024/07/17 00:07:12 Ready to write response ...
	2024/07/17 00:07:21 Ready to marshal response ...
	2024/07/17 00:07:21 Ready to write response ...
	2024/07/17 00:07:22 Ready to marshal response ...
	2024/07/17 00:07:22 Ready to write response ...
	2024/07/17 00:07:22 Ready to marshal response ...
	2024/07/17 00:07:22 Ready to write response ...
	2024/07/17 00:07:22 Ready to marshal response ...
	2024/07/17 00:07:22 Ready to write response ...
	2024/07/17 00:07:22 Ready to marshal response ...
	2024/07/17 00:07:22 Ready to write response ...
	2024/07/17 00:07:25 Ready to marshal response ...
	2024/07/17 00:07:25 Ready to write response ...
	2024/07/17 00:07:48 Ready to marshal response ...
	2024/07/17 00:07:48 Ready to write response ...
	2024/07/17 00:07:57 Ready to marshal response ...
	2024/07/17 00:07:57 Ready to write response ...
	2024/07/17 00:08:19 Ready to marshal response ...
	2024/07/17 00:08:19 Ready to write response ...
	2024/07/17 00:10:11 Ready to marshal response ...
	2024/07/17 00:10:11 Ready to write response ...
	
	
	==> kernel <==
	 00:13:12 up 8 min,  0 users,  load average: 0.14, 0.73, 0.55
	Linux addons-860537 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [70759f229bbf27cec5cd2c67572fdb817b6cb5f562dd0fa5b3befe52e07b6cb9] <==
	W0717 00:07:12.697284       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 00:07:12.697413       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0717 00:07:12.698757       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.177.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.102.177.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.102.177.0:443: connect: connection refused
	E0717 00:07:12.703352       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.177.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.102.177.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.102.177.0:443: connect: connection refused
	I0717 00:07:12.840257       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 00:07:22.556039       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.80.170"}
	E0717 00:07:30.248726       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.251:8443->10.244.0.28:59496: read: connection reset by peer
	I0717 00:07:43.293564       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 00:07:44.320617       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0717 00:07:48.803149       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 00:07:48.996780       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.151.34"}
	I0717 00:08:10.976132       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0717 00:08:35.784373       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:08:35.784947       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:08:35.812132       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:08:35.812202       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:08:35.849729       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:08:35.849781       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:08:35.862952       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:08:35.863015       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 00:08:36.851072       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 00:08:36.863364       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 00:08:36.891472       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0717 00:10:11.371996       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.85.133"}
	
	
	==> kube-controller-manager [a177722461d94437949f90ba19d018220705caf3cbff6f498441d67ca21aeda8] <==
	W0717 00:10:46.007758       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:10:46.007791       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:10:58.193256       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:10:58.193316       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:10:58.967352       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:10:58.967453       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:11:21.069949       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:11:21.070247       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:11:31.659835       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:11:31.659965       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:11:47.257072       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:11:47.257193       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:11:52.727945       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:11:52.728078       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:12:10.934489       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:10.934637       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:12:18.508490       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:18.508607       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:12:30.422566       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:30.422752       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:12:41.595486       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:41.595632       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:12:43.569161       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:43.569260       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 00:13:11.321126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="13.224µs"
	
	
	==> kube-proxy [90a0b8d48757698d0e608dfe79b2fe94258e6c3b05b82f8c4085c8a9b7c185b6] <==
	I0717 00:05:51.586931       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:05:51.626209       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.251"]
	I0717 00:05:51.711525       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:05:51.711577       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:05:51.711594       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:05:51.720196       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:05:51.720422       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:05:51.720513       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:05:51.722559       1 config.go:192] "Starting service config controller"
	I0717 00:05:51.722585       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:05:51.722608       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:05:51.722612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:05:51.723035       1 config.go:319] "Starting node config controller"
	I0717 00:05:51.723041       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:05:51.822772       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:05:51.822832       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:05:51.823607       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9e6262ffd56c7a125e22a281b77eeaa64a1290bd2861165394c264dba8c5696f] <==
	W0717 00:05:32.856926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:05:32.857039       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:05:32.862870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:05:32.862958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:05:32.948937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:05:32.948984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 00:05:32.978124       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:05:32.978155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:05:33.024619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:05:33.024789       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:05:33.030384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:05:33.030508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:05:33.061017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:05:33.061150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:05:33.138270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:05:33.138331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:05:33.161237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 00:05:33.161342       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:05:33.227026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:05:33.227129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:05:33.256373       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:05:33.256418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:05:33.420513       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:05:33.421257       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 00:05:36.097807       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:10:16 addons-860537 kubelet[1285]: I0717 00:10:16.898411    1285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5134eb267dde3768cd570cc7337f30d2f747068198d5ea685ad9a26fd6e8113"} err="failed to get container status \"a5134eb267dde3768cd570cc7337f30d2f747068198d5ea685ad9a26fd6e8113\": rpc error: code = NotFound desc = could not find container \"a5134eb267dde3768cd570cc7337f30d2f747068198d5ea685ad9a26fd6e8113\": container with ID starting with a5134eb267dde3768cd570cc7337f30d2f747068198d5ea685ad9a26fd6e8113 not found: ID does not exist"
	Jul 17 00:10:18 addons-860537 kubelet[1285]: I0717 00:10:18.716317    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69063ca2-2bf1-4ab4-a22d-d60ecab85951" path="/var/lib/kubelet/pods/69063ca2-2bf1-4ab4-a22d-d60ecab85951/volumes"
	Jul 17 00:10:34 addons-860537 kubelet[1285]: E0717 00:10:34.755596    1285 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:10:34 addons-860537 kubelet[1285]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:10:34 addons-860537 kubelet[1285]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:10:34 addons-860537 kubelet[1285]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:10:34 addons-860537 kubelet[1285]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:10:35 addons-860537 kubelet[1285]: I0717 00:10:35.277793    1285 scope.go:117] "RemoveContainer" containerID="d93744428c129022203eafb305d53c6b3d3126455899fe8e66edda7ad2f34549"
	Jul 17 00:10:35 addons-860537 kubelet[1285]: I0717 00:10:35.300090    1285 scope.go:117] "RemoveContainer" containerID="efec1a7218ba11240305e59bd2e782259b8e3a954de33c9df97a35cd263fb1d9"
	Jul 17 00:11:34 addons-860537 kubelet[1285]: E0717 00:11:34.756122    1285 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:11:34 addons-860537 kubelet[1285]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:11:34 addons-860537 kubelet[1285]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:11:34 addons-860537 kubelet[1285]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:11:34 addons-860537 kubelet[1285]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:12:34 addons-860537 kubelet[1285]: E0717 00:12:34.755830    1285 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:12:34 addons-860537 kubelet[1285]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:12:34 addons-860537 kubelet[1285]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:12:34 addons-860537 kubelet[1285]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:12:34 addons-860537 kubelet[1285]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:13:12 addons-860537 kubelet[1285]: I0717 00:13:12.847959    1285 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgg9b\" (UniqueName: \"kubernetes.io/projected/332284a0-4c05-4737-8669-c71012684bb2-kube-api-access-bgg9b\") pod \"332284a0-4c05-4737-8669-c71012684bb2\" (UID: \"332284a0-4c05-4737-8669-c71012684bb2\") "
	Jul 17 00:13:12 addons-860537 kubelet[1285]: I0717 00:13:12.848031    1285 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/332284a0-4c05-4737-8669-c71012684bb2-tmp-dir\") pod \"332284a0-4c05-4737-8669-c71012684bb2\" (UID: \"332284a0-4c05-4737-8669-c71012684bb2\") "
	Jul 17 00:13:12 addons-860537 kubelet[1285]: I0717 00:13:12.848485    1285 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/332284a0-4c05-4737-8669-c71012684bb2-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "332284a0-4c05-4737-8669-c71012684bb2" (UID: "332284a0-4c05-4737-8669-c71012684bb2"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 17 00:13:12 addons-860537 kubelet[1285]: I0717 00:13:12.856551    1285 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332284a0-4c05-4737-8669-c71012684bb2-kube-api-access-bgg9b" (OuterVolumeSpecName: "kube-api-access-bgg9b") pod "332284a0-4c05-4737-8669-c71012684bb2" (UID: "332284a0-4c05-4737-8669-c71012684bb2"). InnerVolumeSpecName "kube-api-access-bgg9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:13:12 addons-860537 kubelet[1285]: I0717 00:13:12.948327    1285 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bgg9b\" (UniqueName: \"kubernetes.io/projected/332284a0-4c05-4737-8669-c71012684bb2-kube-api-access-bgg9b\") on node \"addons-860537\" DevicePath \"\""
	Jul 17 00:13:12 addons-860537 kubelet[1285]: I0717 00:13:12.948373    1285 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/332284a0-4c05-4737-8669-c71012684bb2-tmp-dir\") on node \"addons-860537\" DevicePath \"\""
	
	
	==> storage-provisioner [614282a521d58d24e3137e97082a860d78febe30c3660bd7c9ee1780d71ca762] <==
	I0717 00:05:58.949775       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:05:59.057000       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:05:59.057290       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:05:59.136110       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:05:59.136282       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-860537_501617fd-6546-4595-b758-09f40858752a!
	I0717 00:05:59.136808       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dbfc3767-c447-4add-919d-ab78363ddc31", APIVersion:"v1", ResourceVersion:"765", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-860537_501617fd-6546-4595-b758-09f40858752a became leader
	I0717 00:05:59.243553       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-860537_501617fd-6546-4595-b758-09f40858752a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-860537 -n addons-860537
helpers_test.go:261: (dbg) Run:  kubectl --context addons-860537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-c59844bb4-zq4m7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-860537 describe pod metrics-server-c59844bb4-zq4m7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-860537 describe pod metrics-server-c59844bb4-zq4m7: exit status 1 (61.19969ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-c59844bb4-zq4m7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-860537 describe pod metrics-server-c59844bb4-zq4m7: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (355.23s)
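The kube-apiserver log above shows the v1beta1.metrics.k8s.io APIService repeatedly failing with 503 / connection-refused errors, and the post-mortem lists metrics-server-c59844bb4-zq4m7 as non-running and then cannot find it at all. A minimal follow-up sketch for checking the metrics API by hand; the commands assume the same addons-860537 context and the addon's usual metrics-server deployment in kube-system, and are not part of the recorded run:

	# Is the aggregated metrics API registered and Available?
	kubectl --context addons-860537 get apiservice v1beta1.metrics.k8s.io
	# Is the metrics-server deployment up, and what do its logs say?
	kubectl --context addons-860537 -n kube-system get deploy metrics-server
	kubectl --context addons-860537 -n kube-system logs deploy/metrics-server --tail=50
	# End-to-end check: only succeeds once the APIService reports Available.
	kubectl --context addons-860537 top nodes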

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-860537
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-860537: exit status 82 (2m0.452829528s)

                                                
                                                
-- stdout --
	* Stopping node "addons-860537"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-860537" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-860537
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-860537: exit status 11 (21.569367357s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-860537" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-860537
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-860537: exit status 11 (6.143356377s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-860537" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-860537
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-860537: exit status 11 (6.143071914s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-860537" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.31s)
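Exit status 82 (GUEST_STOP_TIMEOUT) means the kvm2 driver gave up waiting for the VM to power off, and the addon enable/disable commands that follow fail because SSH to 192.168.39.251:22 has no route. A minimal sketch for digging into a stuck stop on the KVM driver; it assumes the libvirt domain carries the profile name (addons-860537) and that virsh is available on the host, and is not part of the recorded run:

	# Re-run the stop with verbose driver logging.
	out/minikube-linux-amd64 stop -p addons-860537 --alsologtostderr -v=5
	# Ask libvirt directly what state the domain is in.
	sudo virsh list --all
	sudo virsh domstate addons-860537
	# Collect the log bundle the error message asks for.
	out/minikube-linux-amd64 logs -p addons-860537 --file=logs.txt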

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (2.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image rm docker.io/kicbase/echo-server:functional-598951 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-598951 image rm docker.io/kicbase/echo-server:functional-598951 --alsologtostderr: (2.381758811s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image ls
functional_test.go:402: expected "docker.io/kicbase/echo-server:functional-598951" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (2.63s)
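The test removed docker.io/kicbase/echo-server:functional-598951 with "image rm" and then still saw the tag in "image ls". A minimal sketch for comparing minikube's view with what the CRI-O runtime itself reports inside the node; it assumes the functional-598951 profile is still running and the grep pattern is only illustrative, and is not part of the recorded run:

	# minikube's own view of the loaded images.
	out/minikube-linux-amd64 -p functional-598951 image ls
	# What CRI-O reports on the node.
	out/minikube-linux-amd64 -p functional-598951 ssh -- sudo crictl images | grep echo-server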

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 node stop m02 -v=7 --alsologtostderr
E0717 00:24:39.220303   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:24:59.701092   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:25:40.662131   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565881 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.464515717s)

                                                
                                                
-- stdout --
	* Stopping node "ha-565881-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:24:37.828998   34863 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:24:37.829119   34863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:24:37.829127   34863 out.go:304] Setting ErrFile to fd 2...
	I0717 00:24:37.829131   34863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:24:37.829295   34863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:24:37.829539   34863 mustload.go:65] Loading cluster: ha-565881
	I0717 00:24:37.829906   34863 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:24:37.829940   34863 stop.go:39] StopHost: ha-565881-m02
	I0717 00:24:37.830326   34863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:24:37.830379   34863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:24:37.845416   34863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45797
	I0717 00:24:37.845824   34863 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:24:37.846479   34863 main.go:141] libmachine: Using API Version  1
	I0717 00:24:37.846506   34863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:24:37.846855   34863 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:24:37.849265   34863 out.go:177] * Stopping node "ha-565881-m02"  ...
	I0717 00:24:37.850754   34863 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 00:24:37.850797   34863 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:24:37.851005   34863 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 00:24:37.851035   34863 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:24:37.853506   34863 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:24:37.853881   34863 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:24:37.853929   34863 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:24:37.854010   34863 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:24:37.854176   34863 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:24:37.854330   34863 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:24:37.854443   34863 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	I0717 00:24:37.944202   34863 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 00:24:37.999158   34863 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 00:24:38.054332   34863 main.go:141] libmachine: Stopping "ha-565881-m02"...
	I0717 00:24:38.054381   34863 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:24:38.055786   34863 main.go:141] libmachine: (ha-565881-m02) Calling .Stop
	I0717 00:24:38.059489   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 0/120
	I0717 00:24:39.060627   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 1/120
	I0717 00:24:40.062039   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 2/120
	I0717 00:24:41.064000   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 3/120
	I0717 00:24:42.065609   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 4/120
	I0717 00:24:43.067601   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 5/120
	I0717 00:24:44.069083   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 6/120
	I0717 00:24:45.070848   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 7/120
	I0717 00:24:46.072073   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 8/120
	I0717 00:24:47.073577   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 9/120
	I0717 00:24:48.075442   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 10/120
	I0717 00:24:49.076984   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 11/120
	I0717 00:24:50.078240   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 12/120
	I0717 00:24:51.079824   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 13/120
	I0717 00:24:52.081722   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 14/120
	I0717 00:24:53.083329   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 15/120
	I0717 00:24:54.084534   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 16/120
	I0717 00:24:55.085760   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 17/120
	I0717 00:24:56.087069   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 18/120
	I0717 00:24:57.088329   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 19/120
	I0717 00:24:58.090351   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 20/120
	I0717 00:24:59.091990   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 21/120
	I0717 00:25:00.093828   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 22/120
	I0717 00:25:01.095240   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 23/120
	I0717 00:25:02.096576   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 24/120
	I0717 00:25:03.098491   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 25/120
	I0717 00:25:04.100834   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 26/120
	I0717 00:25:05.102936   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 27/120
	I0717 00:25:06.104342   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 28/120
	I0717 00:25:07.105908   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 29/120
	I0717 00:25:08.107073   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 30/120
	I0717 00:25:09.108523   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 31/120
	I0717 00:25:10.110121   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 32/120
	I0717 00:25:11.111654   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 33/120
	I0717 00:25:12.113476   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 34/120
	I0717 00:25:13.115665   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 35/120
	I0717 00:25:14.117642   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 36/120
	I0717 00:25:15.118824   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 37/120
	I0717 00:25:16.120477   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 38/120
	I0717 00:25:17.122095   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 39/120
	I0717 00:25:18.123997   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 40/120
	I0717 00:25:19.125301   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 41/120
	I0717 00:25:20.127077   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 42/120
	I0717 00:25:21.128931   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 43/120
	I0717 00:25:22.130201   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 44/120
	I0717 00:25:23.131787   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 45/120
	I0717 00:25:24.133306   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 46/120
	I0717 00:25:25.134966   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 47/120
	I0717 00:25:26.136170   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 48/120
	I0717 00:25:27.137338   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 49/120
	I0717 00:25:28.139310   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 50/120
	I0717 00:25:29.140656   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 51/120
	I0717 00:25:30.141969   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 52/120
	I0717 00:25:31.143214   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 53/120
	I0717 00:25:32.144799   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 54/120
	I0717 00:25:33.146515   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 55/120
	I0717 00:25:34.147814   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 56/120
	I0717 00:25:35.149256   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 57/120
	I0717 00:25:36.150618   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 58/120
	I0717 00:25:37.152187   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 59/120
	I0717 00:25:38.154117   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 60/120
	I0717 00:25:39.156428   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 61/120
	I0717 00:25:40.157825   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 62/120
	I0717 00:25:41.159082   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 63/120
	I0717 00:25:42.160385   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 64/120
	I0717 00:25:43.162181   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 65/120
	I0717 00:25:44.163406   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 66/120
	I0717 00:25:45.164896   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 67/120
	I0717 00:25:46.167086   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 68/120
	I0717 00:25:47.168540   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 69/120
	I0717 00:25:48.170304   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 70/120
	I0717 00:25:49.172092   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 71/120
	I0717 00:25:50.173414   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 72/120
	I0717 00:25:51.174756   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 73/120
	I0717 00:25:52.176380   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 74/120
	I0717 00:25:53.178158   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 75/120
	I0717 00:25:54.179403   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 76/120
	I0717 00:25:55.180753   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 77/120
	I0717 00:25:56.182955   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 78/120
	I0717 00:25:57.184190   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 79/120
	I0717 00:25:58.186279   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 80/120
	I0717 00:25:59.187595   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 81/120
	I0717 00:26:00.188886   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 82/120
	I0717 00:26:01.191177   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 83/120
	I0717 00:26:02.192376   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 84/120
	I0717 00:26:03.194387   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 85/120
	I0717 00:26:04.197080   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 86/120
	I0717 00:26:05.199339   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 87/120
	I0717 00:26:06.200761   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 88/120
	I0717 00:26:07.203046   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 89/120
	I0717 00:26:08.205466   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 90/120
	I0717 00:26:09.206966   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 91/120
	I0717 00:26:10.208282   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 92/120
	I0717 00:26:11.209518   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 93/120
	I0717 00:26:12.210898   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 94/120
	I0717 00:26:13.212725   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 95/120
	I0717 00:26:14.214077   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 96/120
	I0717 00:26:15.215491   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 97/120
	I0717 00:26:16.216780   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 98/120
	I0717 00:26:17.218332   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 99/120
	I0717 00:26:18.220288   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 100/120
	I0717 00:26:19.222020   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 101/120
	I0717 00:26:20.223579   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 102/120
	I0717 00:26:21.225590   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 103/120
	I0717 00:26:22.227072   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 104/120
	I0717 00:26:23.228932   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 105/120
	I0717 00:26:24.230262   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 106/120
	I0717 00:26:25.231648   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 107/120
	I0717 00:26:26.233286   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 108/120
	I0717 00:26:27.234727   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 109/120
	I0717 00:26:28.236897   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 110/120
	I0717 00:26:29.238975   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 111/120
	I0717 00:26:30.240079   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 112/120
	I0717 00:26:31.241487   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 113/120
	I0717 00:26:32.243040   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 114/120
	I0717 00:26:33.244997   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 115/120
	I0717 00:26:34.247222   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 116/120
	I0717 00:26:35.248764   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 117/120
	I0717 00:26:36.250895   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 118/120
	I0717 00:26:37.252427   34863 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 119/120
	I0717 00:26:38.253511   34863 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 00:26:38.253670   34863 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-565881 node stop m02 -v=7 --alsologtostderr": exit status 30
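Note on the failure above: the stderr shows the kvm2 driver polling the VM once per second and giving up after 120 attempts with the machine still reported as "Running" (the ~1s cadence is visible in the timestamps from 00:25:26 through 00:26:37). A minimal Go sketch of that poll-until-stopped pattern is below; it is illustrative only, and waitForStop/getState are hypothetical names, not minikube's actual implementation.
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// getState is a stand-in for the driver's state query (hypothetical name).
	type getState func() string
	
	// waitForStop polls the machine state roughly once per second, up to maxTries
	// times, and reports an error if the machine is still running afterwards.
	func waitForStop(state getState, maxTries int) error {
		for i := 0; i < maxTries; i++ {
			if state() != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxTries)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}
	
	func main() {
		stuck := func() string { return "Running" } // simulate a VM that never stops
		if err := waitForStop(stuck, 5); err != nil { // 5 tries here so the example finishes quickly
			fmt.Println("stop err:", err)
		}
	}
With a guest that never leaves "Running", such a loop exhausts its 120 tries in about two minutes before surfacing the "Temporary Error" seen above.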
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr: exit status 3 (19.023493731s)

                                                
                                                
-- stdout --
	ha-565881
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-565881-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:26:38.297685   35290 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:26:38.297945   35290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:26:38.297953   35290 out.go:304] Setting ErrFile to fd 2...
	I0717 00:26:38.297957   35290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:26:38.298111   35290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:26:38.298279   35290 out.go:298] Setting JSON to false
	I0717 00:26:38.298305   35290 mustload.go:65] Loading cluster: ha-565881
	I0717 00:26:38.298339   35290 notify.go:220] Checking for updates...
	I0717 00:26:38.298638   35290 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:26:38.298651   35290 status.go:255] checking status of ha-565881 ...
	I0717 00:26:38.299004   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:26:38.299044   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:26:38.314707   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
	I0717 00:26:38.315139   35290 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:26:38.315808   35290 main.go:141] libmachine: Using API Version  1
	I0717 00:26:38.315858   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:26:38.316181   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:26:38.316416   35290 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:26:38.317926   35290 status.go:330] ha-565881 host status = "Running" (err=<nil>)
	I0717 00:26:38.317943   35290 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:26:38.318259   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:26:38.318306   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:26:38.332467   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36473
	I0717 00:26:38.332885   35290 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:26:38.333335   35290 main.go:141] libmachine: Using API Version  1
	I0717 00:26:38.333355   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:26:38.333681   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:26:38.333856   35290 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:26:38.337266   35290 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:26:38.337716   35290 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:26:38.337738   35290 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:26:38.337881   35290 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:26:38.338213   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:26:38.338254   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:26:38.353030   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40067
	I0717 00:26:38.353518   35290 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:26:38.353949   35290 main.go:141] libmachine: Using API Version  1
	I0717 00:26:38.353968   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:26:38.354271   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:26:38.354451   35290 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:26:38.354645   35290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:26:38.354678   35290 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:26:38.357409   35290 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:26:38.357871   35290 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:26:38.357899   35290 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:26:38.358005   35290 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:26:38.358143   35290 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:26:38.358305   35290 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:26:38.358420   35290 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:26:38.445228   35290 ssh_runner.go:195] Run: systemctl --version
	I0717 00:26:38.452730   35290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:26:38.478331   35290 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:26:38.478366   35290 api_server.go:166] Checking apiserver status ...
	I0717 00:26:38.478408   35290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:26:38.495377   35290 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0717 00:26:38.505225   35290 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:26:38.505282   35290 ssh_runner.go:195] Run: ls
	I0717 00:26:38.510676   35290 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:26:38.514862   35290 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:26:38.514883   35290 status.go:422] ha-565881 apiserver status = Running (err=<nil>)
	I0717 00:26:38.514892   35290 status.go:257] ha-565881 status: &{Name:ha-565881 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:26:38.514922   35290 status.go:255] checking status of ha-565881-m02 ...
	I0717 00:26:38.515236   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:26:38.515277   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:26:38.530203   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43653
	I0717 00:26:38.530630   35290 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:26:38.531093   35290 main.go:141] libmachine: Using API Version  1
	I0717 00:26:38.531116   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:26:38.531399   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:26:38.531557   35290 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:26:38.533187   35290 status.go:330] ha-565881-m02 host status = "Running" (err=<nil>)
	I0717 00:26:38.533211   35290 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:26:38.533606   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:26:38.533645   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:26:38.548303   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46859
	I0717 00:26:38.548777   35290 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:26:38.549281   35290 main.go:141] libmachine: Using API Version  1
	I0717 00:26:38.549304   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:26:38.549620   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:26:38.549776   35290 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:26:38.552425   35290 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:26:38.552830   35290 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:26:38.552864   35290 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:26:38.552980   35290 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:26:38.553355   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:26:38.553394   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:26:38.568142   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0717 00:26:38.568516   35290 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:26:38.569057   35290 main.go:141] libmachine: Using API Version  1
	I0717 00:26:38.569079   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:26:38.569360   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:26:38.569539   35290 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:26:38.569682   35290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:26:38.569702   35290 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:26:38.572429   35290 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:26:38.572855   35290 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:26:38.572882   35290 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:26:38.573093   35290 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:26:38.573405   35290 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:26:38.573547   35290 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:26:38.573675   35290 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	W0717 00:26:56.912727   35290 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.14:22: connect: no route to host
	W0717 00:26:56.912821   35290 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	E0717 00:26:56.912837   35290 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:26:56.912845   35290 status.go:257] ha-565881-m02 status: &{Name:ha-565881-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:26:56.912869   35290 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:26:56.912880   35290 status.go:255] checking status of ha-565881-m03 ...
	I0717 00:26:56.913180   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:26:56.913216   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:26:56.927560   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32981
	I0717 00:26:56.927925   35290 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:26:56.928410   35290 main.go:141] libmachine: Using API Version  1
	I0717 00:26:56.928430   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:26:56.928754   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:26:56.928951   35290 main.go:141] libmachine: (ha-565881-m03) Calling .GetState
	I0717 00:26:56.930593   35290 status.go:330] ha-565881-m03 host status = "Running" (err=<nil>)
	I0717 00:26:56.930611   35290 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:26:56.930915   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:26:56.930950   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:26:56.945215   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42279
	I0717 00:26:56.945636   35290 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:26:56.946218   35290 main.go:141] libmachine: Using API Version  1
	I0717 00:26:56.946240   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:26:56.946512   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:26:56.946712   35290 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:26:56.949499   35290 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:26:56.949853   35290 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:26:56.949873   35290 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:26:56.950025   35290 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:26:56.950358   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:26:56.950393   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:26:56.964060   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42837
	I0717 00:26:56.964419   35290 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:26:56.964835   35290 main.go:141] libmachine: Using API Version  1
	I0717 00:26:56.964853   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:26:56.965141   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:26:56.965308   35290 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:26:56.965473   35290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:26:56.965491   35290 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:26:56.967946   35290 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:26:56.968378   35290 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:26:56.968405   35290 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:26:56.968527   35290 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:26:56.968701   35290 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:26:56.968844   35290 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:26:56.968966   35290 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:26:57.056044   35290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:26:57.076331   35290 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:26:57.076365   35290 api_server.go:166] Checking apiserver status ...
	I0717 00:26:57.076416   35290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:26:57.093324   35290 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup
	W0717 00:26:57.104012   35290 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:26:57.104062   35290 ssh_runner.go:195] Run: ls
	I0717 00:26:57.108976   35290 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:26:57.113113   35290 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:26:57.113136   35290 status.go:422] ha-565881-m03 apiserver status = Running (err=<nil>)
	I0717 00:26:57.113146   35290 status.go:257] ha-565881-m03 status: &{Name:ha-565881-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:26:57.113164   35290 status.go:255] checking status of ha-565881-m04 ...
	I0717 00:26:57.113456   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:26:57.113493   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:26:57.130277   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44549
	I0717 00:26:57.130680   35290 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:26:57.131159   35290 main.go:141] libmachine: Using API Version  1
	I0717 00:26:57.131182   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:26:57.131495   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:26:57.131701   35290 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:26:57.133537   35290 status.go:330] ha-565881-m04 host status = "Running" (err=<nil>)
	I0717 00:26:57.133555   35290 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:26:57.133939   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:26:57.133984   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:26:57.149515   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I0717 00:26:57.150033   35290 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:26:57.150531   35290 main.go:141] libmachine: Using API Version  1
	I0717 00:26:57.150556   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:26:57.150881   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:26:57.151075   35290 main.go:141] libmachine: (ha-565881-m04) Calling .GetIP
	I0717 00:26:57.153903   35290 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:26:57.154347   35290 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:26:57.154381   35290 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:26:57.154582   35290 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:26:57.154867   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:26:57.154909   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:26:57.169725   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40615
	I0717 00:26:57.170243   35290 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:26:57.170689   35290 main.go:141] libmachine: Using API Version  1
	I0717 00:26:57.170707   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:26:57.171056   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:26:57.171223   35290 main.go:141] libmachine: (ha-565881-m04) Calling .DriverName
	I0717 00:26:57.171396   35290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:26:57.171421   35290 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	I0717 00:26:57.174294   35290 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:26:57.174792   35290 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:26:57.174830   35290 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:26:57.174997   35290 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHPort
	I0717 00:26:57.175185   35290 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHKeyPath
	I0717 00:26:57.175358   35290 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHUsername
	I0717 00:26:57.175494   35290 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m04/id_rsa Username:docker}
	I0717 00:26:57.261460   35290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:26:57.277151   35290 status.go:257] ha-565881-m04 status: &{Name:ha-565881-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr" : exit status 3
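For context on the exit status 3 above: the status stderr shows the per-node checks minikube ran over SSH (df -h /var for disk usage, systemctl is-active for the kubelet, and pgrep plus an HTTPS probe of /healthz at https://192.168.39.254:8443 for the apiserver); for ha-565881-m02 the SSH dial itself failed with "no route to host", so that node is reported as Error/Nonexistent. The following is a minimal Go sketch of the /healthz probe alone, illustrative only: the endpoint address is copied from the log, and TLS verification is skipped for this ad-hoc check.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		// The apiserver serves a cluster-local certificate, so skip verification
		// for this throwaway probe (not appropriate for anything security-sensitive).
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}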
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565881 -n ha-565881
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565881 logs -n 25: (1.410681509s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile507733948/001/cp-test_ha-565881-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881:/home/docker/cp-test_ha-565881-m03_ha-565881.txt                      |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881 sudo cat                                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m03_ha-565881.txt                                |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m02:/home/docker/cp-test_ha-565881-m03_ha-565881-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m02 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m03_ha-565881-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04:/home/docker/cp-test_ha-565881-m03_ha-565881-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m04 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m03_ha-565881-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp testdata/cp-test.txt                                               | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile507733948/001/cp-test_ha-565881-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881:/home/docker/cp-test_ha-565881-m04_ha-565881.txt                      |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881 sudo cat                                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881.txt                                |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m02:/home/docker/cp-test_ha-565881-m04_ha-565881-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m02 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03:/home/docker/cp-test_ha-565881-m04_ha-565881-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m03 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-565881 node stop m02 -v=7                                                    | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:19:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:19:58.740650   30817 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:19:58.740769   30817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:19:58.740779   30817 out.go:304] Setting ErrFile to fd 2...
	I0717 00:19:58.740786   30817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:19:58.740972   30817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:19:58.741512   30817 out.go:298] Setting JSON to false
	I0717 00:19:58.742317   30817 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3748,"bootTime":1721171851,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:19:58.742373   30817 start.go:139] virtualization: kvm guest
	I0717 00:19:58.744467   30817 out.go:177] * [ha-565881] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:19:58.745816   30817 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:19:58.745875   30817 notify.go:220] Checking for updates...
	I0717 00:19:58.748121   30817 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:19:58.749407   30817 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:19:58.750607   30817 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:19:58.751754   30817 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:19:58.752866   30817 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:19:58.754143   30817 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:19:58.787281   30817 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 00:19:58.788393   30817 start.go:297] selected driver: kvm2
	I0717 00:19:58.788410   30817 start.go:901] validating driver "kvm2" against <nil>
	I0717 00:19:58.788423   30817 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:19:58.789142   30817 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:19:58.789222   30817 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:19:58.803958   30817 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:19:58.804000   30817 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:19:58.804221   30817 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:19:58.804268   30817 cni.go:84] Creating CNI manager for ""
	I0717 00:19:58.804280   30817 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0717 00:19:58.804285   30817 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:19:58.804349   30817 start.go:340] cluster config:
	{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:19:58.804438   30817 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:19:58.805891   30817 out.go:177] * Starting "ha-565881" primary control-plane node in "ha-565881" cluster
	I0717 00:19:58.806911   30817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:19:58.806940   30817 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:19:58.806946   30817 cache.go:56] Caching tarball of preloaded images
	I0717 00:19:58.807007   30817 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:19:58.807016   30817 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:19:58.807294   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:19:58.807314   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json: {Name:mk0bce3779ec18ce7d646e20c895f513860f7b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:19:58.807428   30817 start.go:360] acquireMachinesLock for ha-565881: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:19:58.807453   30817 start.go:364] duration metric: took 14.072µs to acquireMachinesLock for "ha-565881"
	I0717 00:19:58.807468   30817 start.go:93] Provisioning new machine with config: &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:19:58.807517   30817 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 00:19:58.808930   30817 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 00:19:58.809055   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:19:58.809092   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:19:58.822695   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44733
	I0717 00:19:58.823149   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:19:58.823696   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:19:58.823715   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:19:58.824046   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:19:58.824222   30817 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:19:58.824434   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:19:58.824615   30817 start.go:159] libmachine.API.Create for "ha-565881" (driver="kvm2")
	I0717 00:19:58.824639   30817 client.go:168] LocalClient.Create starting
	I0717 00:19:58.824664   30817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 00:19:58.824695   30817 main.go:141] libmachine: Decoding PEM data...
	I0717 00:19:58.824712   30817 main.go:141] libmachine: Parsing certificate...
	I0717 00:19:58.824761   30817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 00:19:58.824778   30817 main.go:141] libmachine: Decoding PEM data...
	I0717 00:19:58.824788   30817 main.go:141] libmachine: Parsing certificate...
	I0717 00:19:58.824804   30817 main.go:141] libmachine: Running pre-create checks...
	I0717 00:19:58.824816   30817 main.go:141] libmachine: (ha-565881) Calling .PreCreateCheck
	I0717 00:19:58.825177   30817 main.go:141] libmachine: (ha-565881) Calling .GetConfigRaw
	I0717 00:19:58.825686   30817 main.go:141] libmachine: Creating machine...
	I0717 00:19:58.825700   30817 main.go:141] libmachine: (ha-565881) Calling .Create
	I0717 00:19:58.825859   30817 main.go:141] libmachine: (ha-565881) Creating KVM machine...
	I0717 00:19:58.827115   30817 main.go:141] libmachine: (ha-565881) DBG | found existing default KVM network
	I0717 00:19:58.827768   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:19:58.827647   30840 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045c0}
	I0717 00:19:58.827851   30817 main.go:141] libmachine: (ha-565881) DBG | created network xml: 
	I0717 00:19:58.827872   30817 main.go:141] libmachine: (ha-565881) DBG | <network>
	I0717 00:19:58.827883   30817 main.go:141] libmachine: (ha-565881) DBG |   <name>mk-ha-565881</name>
	I0717 00:19:58.827894   30817 main.go:141] libmachine: (ha-565881) DBG |   <dns enable='no'/>
	I0717 00:19:58.827905   30817 main.go:141] libmachine: (ha-565881) DBG |   
	I0717 00:19:58.827918   30817 main.go:141] libmachine: (ha-565881) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 00:19:58.827930   30817 main.go:141] libmachine: (ha-565881) DBG |     <dhcp>
	I0717 00:19:58.827942   30817 main.go:141] libmachine: (ha-565881) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 00:19:58.827954   30817 main.go:141] libmachine: (ha-565881) DBG |     </dhcp>
	I0717 00:19:58.827964   30817 main.go:141] libmachine: (ha-565881) DBG |   </ip>
	I0717 00:19:58.827971   30817 main.go:141] libmachine: (ha-565881) DBG |   
	I0717 00:19:58.827978   30817 main.go:141] libmachine: (ha-565881) DBG | </network>
	I0717 00:19:58.827985   30817 main.go:141] libmachine: (ha-565881) DBG | 
	I0717 00:19:58.832646   30817 main.go:141] libmachine: (ha-565881) DBG | trying to create private KVM network mk-ha-565881 192.168.39.0/24...
	I0717 00:19:58.895480   30817 main.go:141] libmachine: (ha-565881) DBG | private KVM network mk-ha-565881 192.168.39.0/24 created
	I0717 00:19:58.895511   30817 main.go:141] libmachine: (ha-565881) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881 ...
	I0717 00:19:58.895523   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:19:58.895474   30840 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:19:58.895540   30817 main.go:141] libmachine: (ha-565881) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 00:19:58.895747   30817 main.go:141] libmachine: (ha-565881) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 00:19:59.131408   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:19:59.131300   30840 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa...
	I0717 00:19:59.246760   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:19:59.246623   30840 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/ha-565881.rawdisk...
	I0717 00:19:59.246795   30817 main.go:141] libmachine: (ha-565881) DBG | Writing magic tar header
	I0717 00:19:59.246806   30817 main.go:141] libmachine: (ha-565881) DBG | Writing SSH key tar header
	I0717 00:19:59.246814   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:19:59.246733   30840 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881 ...
	I0717 00:19:59.246850   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881
	I0717 00:19:59.246873   30817 main.go:141] libmachine: (ha-565881) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881 (perms=drwx------)
	I0717 00:19:59.246884   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 00:19:59.246895   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:19:59.246905   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 00:19:59.246912   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:19:59.246922   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:19:59.246933   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home
	I0717 00:19:59.246952   30817 main.go:141] libmachine: (ha-565881) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:19:59.246963   30817 main.go:141] libmachine: (ha-565881) DBG | Skipping /home - not owner
	I0717 00:19:59.246979   30817 main.go:141] libmachine: (ha-565881) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 00:19:59.246988   30817 main.go:141] libmachine: (ha-565881) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 00:19:59.246997   30817 main.go:141] libmachine: (ha-565881) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:19:59.247007   30817 main.go:141] libmachine: (ha-565881) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:19:59.247021   30817 main.go:141] libmachine: (ha-565881) Creating domain...
	I0717 00:19:59.248198   30817 main.go:141] libmachine: (ha-565881) define libvirt domain using xml: 
	I0717 00:19:59.248216   30817 main.go:141] libmachine: (ha-565881) <domain type='kvm'>
	I0717 00:19:59.248222   30817 main.go:141] libmachine: (ha-565881)   <name>ha-565881</name>
	I0717 00:19:59.248227   30817 main.go:141] libmachine: (ha-565881)   <memory unit='MiB'>2200</memory>
	I0717 00:19:59.248232   30817 main.go:141] libmachine: (ha-565881)   <vcpu>2</vcpu>
	I0717 00:19:59.248238   30817 main.go:141] libmachine: (ha-565881)   <features>
	I0717 00:19:59.248244   30817 main.go:141] libmachine: (ha-565881)     <acpi/>
	I0717 00:19:59.248252   30817 main.go:141] libmachine: (ha-565881)     <apic/>
	I0717 00:19:59.248256   30817 main.go:141] libmachine: (ha-565881)     <pae/>
	I0717 00:19:59.248264   30817 main.go:141] libmachine: (ha-565881)     
	I0717 00:19:59.248280   30817 main.go:141] libmachine: (ha-565881)   </features>
	I0717 00:19:59.248284   30817 main.go:141] libmachine: (ha-565881)   <cpu mode='host-passthrough'>
	I0717 00:19:59.248289   30817 main.go:141] libmachine: (ha-565881)   
	I0717 00:19:59.248293   30817 main.go:141] libmachine: (ha-565881)   </cpu>
	I0717 00:19:59.248298   30817 main.go:141] libmachine: (ha-565881)   <os>
	I0717 00:19:59.248305   30817 main.go:141] libmachine: (ha-565881)     <type>hvm</type>
	I0717 00:19:59.248311   30817 main.go:141] libmachine: (ha-565881)     <boot dev='cdrom'/>
	I0717 00:19:59.248322   30817 main.go:141] libmachine: (ha-565881)     <boot dev='hd'/>
	I0717 00:19:59.248334   30817 main.go:141] libmachine: (ha-565881)     <bootmenu enable='no'/>
	I0717 00:19:59.248343   30817 main.go:141] libmachine: (ha-565881)   </os>
	I0717 00:19:59.248354   30817 main.go:141] libmachine: (ha-565881)   <devices>
	I0717 00:19:59.248375   30817 main.go:141] libmachine: (ha-565881)     <disk type='file' device='cdrom'>
	I0717 00:19:59.248419   30817 main.go:141] libmachine: (ha-565881)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/boot2docker.iso'/>
	I0717 00:19:59.248441   30817 main.go:141] libmachine: (ha-565881)       <target dev='hdc' bus='scsi'/>
	I0717 00:19:59.248449   30817 main.go:141] libmachine: (ha-565881)       <readonly/>
	I0717 00:19:59.248457   30817 main.go:141] libmachine: (ha-565881)     </disk>
	I0717 00:19:59.248463   30817 main.go:141] libmachine: (ha-565881)     <disk type='file' device='disk'>
	I0717 00:19:59.248471   30817 main.go:141] libmachine: (ha-565881)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:19:59.248480   30817 main.go:141] libmachine: (ha-565881)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/ha-565881.rawdisk'/>
	I0717 00:19:59.248487   30817 main.go:141] libmachine: (ha-565881)       <target dev='hda' bus='virtio'/>
	I0717 00:19:59.248495   30817 main.go:141] libmachine: (ha-565881)     </disk>
	I0717 00:19:59.248500   30817 main.go:141] libmachine: (ha-565881)     <interface type='network'>
	I0717 00:19:59.248508   30817 main.go:141] libmachine: (ha-565881)       <source network='mk-ha-565881'/>
	I0717 00:19:59.248512   30817 main.go:141] libmachine: (ha-565881)       <model type='virtio'/>
	I0717 00:19:59.248537   30817 main.go:141] libmachine: (ha-565881)     </interface>
	I0717 00:19:59.248576   30817 main.go:141] libmachine: (ha-565881)     <interface type='network'>
	I0717 00:19:59.248591   30817 main.go:141] libmachine: (ha-565881)       <source network='default'/>
	I0717 00:19:59.248601   30817 main.go:141] libmachine: (ha-565881)       <model type='virtio'/>
	I0717 00:19:59.248612   30817 main.go:141] libmachine: (ha-565881)     </interface>
	I0717 00:19:59.248622   30817 main.go:141] libmachine: (ha-565881)     <serial type='pty'>
	I0717 00:19:59.248635   30817 main.go:141] libmachine: (ha-565881)       <target port='0'/>
	I0717 00:19:59.248647   30817 main.go:141] libmachine: (ha-565881)     </serial>
	I0717 00:19:59.248665   30817 main.go:141] libmachine: (ha-565881)     <console type='pty'>
	I0717 00:19:59.248683   30817 main.go:141] libmachine: (ha-565881)       <target type='serial' port='0'/>
	I0717 00:19:59.248699   30817 main.go:141] libmachine: (ha-565881)     </console>
	I0717 00:19:59.248710   30817 main.go:141] libmachine: (ha-565881)     <rng model='virtio'>
	I0717 00:19:59.248722   30817 main.go:141] libmachine: (ha-565881)       <backend model='random'>/dev/random</backend>
	I0717 00:19:59.248732   30817 main.go:141] libmachine: (ha-565881)     </rng>
	I0717 00:19:59.248740   30817 main.go:141] libmachine: (ha-565881)     
	I0717 00:19:59.248744   30817 main.go:141] libmachine: (ha-565881)     
	I0717 00:19:59.248754   30817 main.go:141] libmachine: (ha-565881)   </devices>
	I0717 00:19:59.248765   30817 main.go:141] libmachine: (ha-565881) </domain>
	I0717 00:19:59.248776   30817 main.go:141] libmachine: (ha-565881) 
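The domain XML above is handed to libvirt programmatically; the equivalent manual steps with virsh would be roughly the following (a sketch; domain.xml is an assumed file name for that XML):

    virsh define domain.xml   # register the ha-565881 domain
    virsh start ha-565881     # first boot comes from the boot2docker ISO (<boot dev='cdrom'/>)
    virsh dumpxml ha-565881   # what the "Getting domain xml..." step below retrieves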
	I0717 00:19:59.252949   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:04:20:e4 in network default
	I0717 00:19:59.253428   30817 main.go:141] libmachine: (ha-565881) Ensuring networks are active...
	I0717 00:19:59.253444   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:19:59.254035   30817 main.go:141] libmachine: (ha-565881) Ensuring network default is active
	I0717 00:19:59.254266   30817 main.go:141] libmachine: (ha-565881) Ensuring network mk-ha-565881 is active
	I0717 00:19:59.254684   30817 main.go:141] libmachine: (ha-565881) Getting domain xml...
	I0717 00:19:59.255485   30817 main.go:141] libmachine: (ha-565881) Creating domain...
	I0717 00:20:00.439716   30817 main.go:141] libmachine: (ha-565881) Waiting to get IP...
	I0717 00:20:00.440504   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:00.440831   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:00.440857   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:00.440813   30840 retry.go:31] will retry after 279.96745ms: waiting for machine to come up
	I0717 00:20:00.722294   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:00.722799   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:00.722825   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:00.722742   30840 retry.go:31] will retry after 319.661574ms: waiting for machine to come up
	I0717 00:20:01.045618   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:01.046162   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:01.046190   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:01.046102   30840 retry.go:31] will retry after 366.795432ms: waiting for machine to come up
	I0717 00:20:01.414622   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:01.415055   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:01.415078   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:01.415021   30840 retry.go:31] will retry after 561.296643ms: waiting for machine to come up
	I0717 00:20:01.977961   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:01.978449   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:01.978477   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:01.978405   30840 retry.go:31] will retry after 517.966337ms: waiting for machine to come up
	I0717 00:20:02.498132   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:02.498673   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:02.498694   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:02.498647   30840 retry.go:31] will retry after 609.470693ms: waiting for machine to come up
	I0717 00:20:03.109589   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:03.109946   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:03.109980   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:03.109917   30840 retry.go:31] will retry after 917.846378ms: waiting for machine to come up
	I0717 00:20:04.029475   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:04.029926   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:04.029962   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:04.029889   30840 retry.go:31] will retry after 992.674633ms: waiting for machine to come up
	I0717 00:20:05.023753   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:05.024260   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:05.024286   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:05.024220   30840 retry.go:31] will retry after 1.465280494s: waiting for machine to come up
	I0717 00:20:06.492017   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:06.492366   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:06.492397   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:06.492330   30840 retry.go:31] will retry after 2.258281771s: waiting for machine to come up
	I0717 00:20:08.751788   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:08.752306   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:08.752330   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:08.752260   30840 retry.go:31] will retry after 1.924347004s: waiting for machine to come up
	I0717 00:20:10.678814   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:10.679150   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:10.679183   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:10.679106   30840 retry.go:31] will retry after 3.289331366s: waiting for machine to come up
	I0717 00:20:13.970143   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:13.970436   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:13.970476   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:13.970410   30840 retry.go:31] will retry after 2.743570764s: waiting for machine to come up
	I0717 00:20:16.717289   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:16.717628   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:16.717673   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:16.717595   30840 retry.go:31] will retry after 4.080092625s: waiting for machine to come up
	I0717 00:20:20.800532   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:20.800958   30817 main.go:141] libmachine: (ha-565881) Found IP for machine: 192.168.39.238
	I0717 00:20:20.800985   30817 main.go:141] libmachine: (ha-565881) Reserving static IP address...
	I0717 00:20:20.800998   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has current primary IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:20.801368   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find host DHCP lease matching {name: "ha-565881", mac: "52:54:00:ff:f7:b6", ip: "192.168.39.238"} in network mk-ha-565881
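The retries above are only waiting for the guest to obtain a DHCP lease on mk-ha-565881; roughly the same check can be done by hand against libvirt (a sketch, using the MAC address from the log):

    # poll until the guest's MAC appears in the network's lease table
    until virsh net-dhcp-leases mk-ha-565881 | grep -q '52:54:00:ff:f7:b6'; do
      sleep 2
    done
    virsh net-dhcp-leases mk-ha-565881   # shows the assigned address, 192.168.39.238 here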
	I0717 00:20:20.872223   30817 main.go:141] libmachine: (ha-565881) DBG | Getting to WaitForSSH function...
	I0717 00:20:20.872252   30817 main.go:141] libmachine: (ha-565881) Reserved static IP address: 192.168.39.238
	I0717 00:20:20.872264   30817 main.go:141] libmachine: (ha-565881) Waiting for SSH to be available...
	I0717 00:20:20.874531   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:20.874938   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:20.874970   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:20.875077   30817 main.go:141] libmachine: (ha-565881) DBG | Using SSH client type: external
	I0717 00:20:20.875100   30817 main.go:141] libmachine: (ha-565881) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa (-rw-------)
	I0717 00:20:20.875123   30817 main.go:141] libmachine: (ha-565881) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:20:20.875148   30817 main.go:141] libmachine: (ha-565881) DBG | About to run SSH command:
	I0717 00:20:20.875162   30817 main.go:141] libmachine: (ha-565881) DBG | exit 0
	I0717 00:20:21.000487   30817 main.go:141] libmachine: (ha-565881) DBG | SSH cmd err, output: <nil>: 
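The WaitForSSH step simply keeps running a no-op command over SSH with the machine's key until it exits 0; an equivalent manual probe (a sketch, with the user, address, and key path taken from the log above) is:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
        -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa \
        docker@192.168.39.238 'exit 0' && echo 'SSH is up'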
	I0717 00:20:21.000762   30817 main.go:141] libmachine: (ha-565881) KVM machine creation complete!
	I0717 00:20:21.001109   30817 main.go:141] libmachine: (ha-565881) Calling .GetConfigRaw
	I0717 00:20:21.001770   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:21.002024   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:21.002265   30817 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:20:21.002283   30817 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:20:21.003590   30817 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:20:21.003604   30817 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:20:21.003610   30817 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:20:21.003616   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.005956   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.006317   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.006348   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.006423   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:21.006583   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.006719   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.006873   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:21.007037   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:20:21.007214   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:20:21.007226   30817 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:20:21.120128   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:20:21.120163   30817 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:20:21.120175   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.122946   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.123327   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.123350   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.123498   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:21.123697   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.123845   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.124005   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:21.124188   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:20:21.124354   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:20:21.124364   30817 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:20:21.237438   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:20:21.237529   30817 main.go:141] libmachine: found compatible host: buildroot
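Provisioner detection keys off /etc/os-release, which is a shell-sourceable key=value file; the fields used above can be read directly on the guest (a sketch):

    . /etc/os-release
    echo "$ID $VERSION_ID"   # buildroot 2023.02.9, matching the output above
    echo "$PRETTY_NAME"      # Buildroot 2023.02.9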
	I0717 00:20:21.237543   30817 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:20:21.237555   30817 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:20:21.237795   30817 buildroot.go:166] provisioning hostname "ha-565881"
	I0717 00:20:21.237818   30817 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:20:21.238018   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.240425   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.240735   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.240759   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.240925   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:21.241079   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.241237   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.241337   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:21.241577   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:20:21.241741   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:20:21.241755   30817 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565881 && echo "ha-565881" | sudo tee /etc/hostname
	I0717 00:20:21.366943   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881
	
	I0717 00:20:21.366981   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.369796   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.370176   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.370206   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.370413   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:21.370613   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.370779   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.370935   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:21.371087   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:20:21.371400   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:20:21.371436   30817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565881/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:20:21.489980   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
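Once the hostname script above has run, the result can be spot-checked on the guest (a sketch):

    hostname                    # should print ha-565881
    cat /etc/hostname           # written by the sudo tee in the command above
    grep ha-565881 /etc/hosts   # shows the 127.0.1.1 mapping added by the script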
	I0717 00:20:21.490007   30817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:20:21.490029   30817 buildroot.go:174] setting up certificates
	I0717 00:20:21.490040   30817 provision.go:84] configureAuth start
	I0717 00:20:21.490051   30817 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:20:21.490431   30817 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:20:21.493171   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.493531   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.493554   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.493744   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.496311   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.496694   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.496717   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.496865   30817 provision.go:143] copyHostCerts
	I0717 00:20:21.496893   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:20:21.496969   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 00:20:21.496980   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:20:21.497076   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:20:21.497217   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:20:21.497247   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 00:20:21.497258   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:20:21.497303   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:20:21.497382   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:20:21.497405   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 00:20:21.497414   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:20:21.497450   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:20:21.497525   30817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.ha-565881 san=[127.0.0.1 192.168.39.238 ha-565881 localhost minikube]
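The server certificate above is generated in Go (crypto/x509), but an openssl sketch of the same operation looks roughly like this; the file names are assumptions, the SANs and organization come from the log line above, and ca.pem/ca-key.pem refer to the files under .minikube/certs mentioned there:

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.ha-565881"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.238,DNS:ha-565881,DNS:localhost,DNS:minikube')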
	I0717 00:20:21.619638   30817 provision.go:177] copyRemoteCerts
	I0717 00:20:21.619692   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:20:21.619715   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.622265   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.622627   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.622660   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.622817   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:21.623029   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.623195   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:21.623349   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:20:21.707053   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:20:21.707136   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:20:21.731617   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:20:21.731688   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 00:20:21.756115   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:20:21.756182   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:20:21.779347   30817 provision.go:87] duration metric: took 289.296091ms to configureAuth
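The copyRemoteCerts step above drops the CA and the server key pair under /etc/docker on the guest; a quick sanity check there would be (a sketch):

    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'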
	I0717 00:20:21.779370   30817 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:20:21.779548   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:20:21.779614   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.782086   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.782387   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.782424   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.782566   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:21.782786   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.782972   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.783125   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:21.783259   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:20:21.783429   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:20:21.783451   30817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:20:22.065032   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
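The %!s(MISSING) in the printed command is a logging artifact (the format argument was not captured in the log); judging from the SSH output, the command writes a one-line sysconfig drop-in, which could be reproduced like this (a sketch):

    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
    cat /etc/sysconfig/crio.minikube   # should match the output captured above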
	
	I0717 00:20:22.065059   30817 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:20:22.065079   30817 main.go:141] libmachine: (ha-565881) Calling .GetURL
	I0717 00:20:22.066557   30817 main.go:141] libmachine: (ha-565881) DBG | Using libvirt version 6000000
	I0717 00:20:22.068726   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.069010   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.069039   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.069173   30817 main.go:141] libmachine: Docker is up and running!
	I0717 00:20:22.069190   30817 main.go:141] libmachine: Reticulating splines...
	I0717 00:20:22.069197   30817 client.go:171] duration metric: took 23.244551778s to LocalClient.Create
	I0717 00:20:22.069221   30817 start.go:167] duration metric: took 23.244608294s to libmachine.API.Create "ha-565881"
	I0717 00:20:22.069232   30817 start.go:293] postStartSetup for "ha-565881" (driver="kvm2")
	I0717 00:20:22.069241   30817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:20:22.069270   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:22.069550   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:20:22.069572   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:22.071733   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.071977   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.072000   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.072161   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:22.072350   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:22.072519   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:22.072687   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:20:22.159752   30817 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:20:22.163990   30817 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:20:22.164010   30817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 00:20:22.164064   30817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 00:20:22.164149   30817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 00:20:22.164156   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /etc/ssl/certs/200682.pem
	I0717 00:20:22.164247   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:20:22.173941   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:20:22.197521   30817 start.go:296] duration metric: took 128.276491ms for postStartSetup
	I0717 00:20:22.197568   30817 main.go:141] libmachine: (ha-565881) Calling .GetConfigRaw
	I0717 00:20:22.198123   30817 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:20:22.200694   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.200990   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.201021   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.201240   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:20:22.201442   30817 start.go:128] duration metric: took 23.39391468s to createHost
	I0717 00:20:22.201463   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:22.203691   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.204165   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.204185   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.204226   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:22.204417   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:22.204598   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:22.204709   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:22.204884   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:20:22.205047   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:20:22.205077   30817 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:20:22.317174   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721175622.292495790
	
	I0717 00:20:22.317197   30817 fix.go:216] guest clock: 1721175622.292495790
	I0717 00:20:22.317206   30817 fix.go:229] Guest: 2024-07-17 00:20:22.29249579 +0000 UTC Remote: 2024-07-17 00:20:22.201454346 +0000 UTC m=+23.494146658 (delta=91.041444ms)
	I0717 00:20:22.317247   30817 fix.go:200] guest clock delta is within tolerance: 91.041444ms
	I0717 00:20:22.317254   30817 start.go:83] releasing machines lock for "ha-565881", held for 23.509792724s
	I0717 00:20:22.317280   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:22.317560   30817 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:20:22.320411   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.320783   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.320828   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.320988   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:22.321419   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:22.321564   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:22.321641   30817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:20:22.321688   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:22.321746   30817 ssh_runner.go:195] Run: cat /version.json
	I0717 00:20:22.321769   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:22.323939   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.324287   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.324313   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.324338   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.324467   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:22.324659   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:22.324744   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.324770   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.324794   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:22.324884   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:22.324955   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:20:22.325040   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:22.325209   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:22.325333   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:20:22.426318   30817 ssh_runner.go:195] Run: systemctl --version
	I0717 00:20:22.432459   30817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:20:22.588650   30817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:20:22.595562   30817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:20:22.595625   30817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:20:22.611569   30817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:20:22.611594   30817 start.go:495] detecting cgroup driver to use...
	I0717 00:20:22.611664   30817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:20:22.628915   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:20:22.642572   30817 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:20:22.642621   30817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:20:22.655563   30817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:20:22.668534   30817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:20:22.778146   30817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:20:22.943243   30817 docker.go:233] disabling docker service ...
	I0717 00:20:22.943313   30817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:20:22.965917   30817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:20:22.978471   30817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:20:23.098504   30817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:20:23.206963   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:20:23.220162   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:20:23.238772   30817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:20:23.238852   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:20:23.249269   30817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:20:23.249332   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:20:23.259773   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:20:23.269559   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:20:23.279583   30817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:20:23.289368   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:20:23.299052   30817 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:20:23.316677   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
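Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is a hedged check, not output captured from the run:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, based on the substitutions above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",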
	I0717 00:20:23.327278   30817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:20:23.336270   30817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:20:23.336328   30817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:20:23.349361   30817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:20:23.358454   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:20:23.470512   30817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:20:23.607031   30817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:20:23.607093   30817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:20:23.612093   30817 start.go:563] Will wait 60s for crictl version
	I0717 00:20:23.612169   30817 ssh_runner.go:195] Run: which crictl
	I0717 00:20:23.615996   30817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:20:23.653503   30817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:20:23.653582   30817 ssh_runner.go:195] Run: crio --version
	I0717 00:20:23.680930   30817 ssh_runner.go:195] Run: crio --version
	I0717 00:20:23.711136   30817 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:20:23.712435   30817 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:20:23.715351   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:23.715819   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:23.715842   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:23.716106   30817 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:20:23.720427   30817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:20:23.734557   30817 kubeadm.go:883] updating cluster {Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:20:23.734686   30817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:20:23.734747   30817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:20:23.767825   30817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 00:20:23.767888   30817 ssh_runner.go:195] Run: which lz4
	I0717 00:20:23.771651   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 00:20:23.771733   30817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 00:20:23.775721   30817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 00:20:23.775742   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 00:20:25.129825   30817 crio.go:462] duration metric: took 1.358114684s to copy over tarball
	I0717 00:20:25.129913   30817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 00:20:27.228863   30817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.098922659s)
	I0717 00:20:27.228889   30817 crio.go:469] duration metric: took 2.099034446s to extract the tarball
	I0717 00:20:27.228898   30817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 00:20:27.267708   30817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:20:27.314764   30817 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:20:27.314788   30817 cache_images.go:84] Images are preloaded, skipping loading
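For reference, the preload handling above boils down to the following shell steps on the ha-565881 VM. This is only a sketch of what the log shows, not the literal implementation: the image name, tarball destination and tar flags are taken from the log lines above, and the grep for kube-apiserver stands in for minikube's internal image comparison.

    # Sketch of the preload flow logged above (run on the node over SSH).
    if ! sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.30.2'; then
      # minikube first copies the cached tarball to /preloaded.tar.lz4 via scp.
      sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
      sudo rm /preloaded.tar.lz4
    fi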
	I0717 00:20:27.314796   30817 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.30.2 crio true true} ...
	I0717 00:20:27.314905   30817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:20:27.314968   30817 ssh_runner.go:195] Run: crio config
	I0717 00:20:27.358528   30817 cni.go:84] Creating CNI manager for ""
	I0717 00:20:27.358555   30817 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 00:20:27.358566   30817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:20:27.358588   30817 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565881 NodeName:ha-565881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:20:27.358720   30817 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565881"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
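The kubeadm config rendered above is what later gets copied to /var/tmp/minikube/kubeadm.yaml (see the scp lines below). A quick sanity check on the node could look like the sketch below; the `kubeadm config validate` subcommand is an assumption based on recent kubeadm releases, not something this test run executes.

    # Hypothetical validation of the generated config on the node.
    sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    # Or compare against upstream defaults:
    sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config print init-defaults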
	
	I0717 00:20:27.358741   30817 kube-vip.go:115] generating kube-vip config ...
	I0717 00:20:27.358783   30817 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:20:27.375274   30817 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:20:27.375387   30817 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
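kube-vip runs as a static pod and, with vip_arp and lb_enable set as above, announces the HA VIP 192.168.39.254 on eth0 and load-balances API traffic on port 8443. A minimal way to confirm the VIP from the control-plane node, as a sketch only and assuming anonymous access to /healthz is still enabled (the Kubernetes default):

    # The VIP should appear as an address on eth0 once kube-vip wins leader election.
    ip -4 addr show dev eth0 | grep 192.168.39.254
    # /healthz is readable without credentials under the default RBAC policy.
    curl -sk https://192.168.39.254:8443/healthz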
	I0717 00:20:27.375441   30817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:20:27.385362   30817 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:20:27.385428   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 00:20:27.394951   30817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 00:20:27.411402   30817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:20:27.428532   30817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 00:20:27.444909   30817 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0717 00:20:27.460904   30817 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:20:27.464763   30817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:20:27.477002   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:20:27.607936   30817 ssh_runner.go:195] Run: sudo systemctl start kubelet
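At this point the kubelet drop-in, unit file, kubeadm config and kube-vip manifest from above have been written and the kubelet has been started. A quick verification sketch on the node, with paths and the hosts entry taken from the scp and /etc/hosts lines above:

    systemctl cat kubelet                           # shows kubelet.service plus 10-kubeadm.conf
    systemctl is-active kubelet
    getent hosts control-plane.minikube.internal    # expect 192.168.39.254 from the /etc/hosts rewrite above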
	I0717 00:20:27.626049   30817 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881 for IP: 192.168.39.238
	I0717 00:20:27.626074   30817 certs.go:194] generating shared ca certs ...
	I0717 00:20:27.626093   30817 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:27.626252   30817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 00:20:27.626306   30817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 00:20:27.626319   30817 certs.go:256] generating profile certs ...
	I0717 00:20:27.626422   30817 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key
	I0717 00:20:27.626453   30817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.crt with IP's: []
	I0717 00:20:27.920724   30817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.crt ...
	I0717 00:20:27.920749   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.crt: {Name:mk5d1137087700efa0f3abecf8f2e2e63a2bbf92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:27.920907   30817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key ...
	I0717 00:20:27.920918   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key: {Name:mk637fa6caecf24ee3b93c51fdb89fafa5939ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:27.920988   30817 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.61cc86ec
	I0717 00:20:27.921001   30817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.61cc86ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.254]
	I0717 00:20:28.103272   30817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.61cc86ec ...
	I0717 00:20:28.103300   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.61cc86ec: {Name:mk579d14b971844df09f8ab5aeaf81190afa9f9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:28.103452   30817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.61cc86ec ...
	I0717 00:20:28.103464   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.61cc86ec: {Name:mk76b1ccb949508d4fd35d54e3f9bf659d7656aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:28.103528   30817 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.61cc86ec -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt
	I0717 00:20:28.103619   30817 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.61cc86ec -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key
	I0717 00:20:28.103683   30817 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key
	I0717 00:20:28.103697   30817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt with IP's: []
	I0717 00:20:28.212939   30817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt ...
	I0717 00:20:28.212964   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt: {Name:mk0c4fe949694602f58bd41c63de8ede692cca0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:28.213106   30817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key ...
	I0717 00:20:28.213116   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key: {Name:mk99c0650071c42da3360e314f055c42b03db4f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:28.213231   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:20:28.213255   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:20:28.213269   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:20:28.213283   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:20:28.213295   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:20:28.213309   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:20:28.213318   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:20:28.213330   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:20:28.213376   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 00:20:28.213407   30817 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 00:20:28.213416   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:20:28.213437   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:20:28.213458   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:20:28.213478   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 00:20:28.213515   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:20:28.213544   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:20:28.213555   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem -> /usr/share/ca-certificates/20068.pem
	I0717 00:20:28.213563   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /usr/share/ca-certificates/200682.pem
	I0717 00:20:28.214069   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:20:28.240174   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:20:28.263734   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:20:28.287105   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:20:28.309725   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 00:20:28.332322   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:20:28.355111   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:20:28.379763   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:20:28.404567   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:20:28.429728   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 00:20:28.459960   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 00:20:28.482807   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
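All profile certificates generated above now sit under /var/lib/minikube/certs. Since the apiserver certificate was signed for both the node IP and the HA VIP (see the IP list in the generation step), its SANs can be spot-checked with openssl. This is a verification sketch, not something the test itself runs:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
    # Expect 192.168.39.238 and 192.168.39.254 among the IP SANs.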
	I0717 00:20:28.499548   30817 ssh_runner.go:195] Run: openssl version
	I0717 00:20:28.505411   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:20:28.516086   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:20:28.520576   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:20:28.520626   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:20:28.526482   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:20:28.537516   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 00:20:28.548573   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 00:20:28.552891   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 00:20:28.552932   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 00:20:28.558694   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 00:20:28.569433   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 00:20:28.580351   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 00:20:28.584755   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 00:20:28.584804   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 00:20:28.590170   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
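The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject-hash form that OpenSSL-linked programs use to look up a trusted CA, which is why each `ln -fs` is preceded by an `openssl x509 -hash` call. The relationship can be reproduced directly on the node:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem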
	I0717 00:20:28.601089   30817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:20:28.604969   30817 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:20:28.605021   30817 kubeadm.go:392] StartCluster: {Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:20:28.605110   30817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:20:28.605173   30817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:20:28.639951   30817 cri.go:89] found id: ""
	I0717 00:20:28.640015   30817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:20:28.651560   30817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 00:20:28.663297   30817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 00:20:28.674942   30817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 00:20:28.674964   30817 kubeadm.go:157] found existing configuration files:
	
	I0717 00:20:28.675006   30817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 00:20:28.683959   30817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 00:20:28.684041   30817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 00:20:28.693871   30817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 00:20:28.703291   30817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 00:20:28.703358   30817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 00:20:28.713021   30817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 00:20:28.722076   30817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 00:20:28.722158   30817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 00:20:28.731747   30817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 00:20:28.740446   30817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 00:20:28.740494   30817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 00:20:28.749690   30817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 00:20:29.001381   30817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 00:20:40.208034   30817 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 00:20:40.208141   30817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 00:20:40.208255   30817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 00:20:40.208345   30817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 00:20:40.208468   30817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 00:20:40.208531   30817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 00:20:40.210142   30817 out.go:204]   - Generating certificates and keys ...
	I0717 00:20:40.210233   30817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 00:20:40.210305   30817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 00:20:40.210370   30817 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 00:20:40.210452   30817 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 00:20:40.210530   30817 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 00:20:40.210601   30817 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 00:20:40.210688   30817 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 00:20:40.210845   30817 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-565881 localhost] and IPs [192.168.39.238 127.0.0.1 ::1]
	I0717 00:20:40.210929   30817 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 00:20:40.211071   30817 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-565881 localhost] and IPs [192.168.39.238 127.0.0.1 ::1]
	I0717 00:20:40.211146   30817 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 00:20:40.211240   30817 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 00:20:40.211328   30817 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 00:20:40.211401   30817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 00:20:40.211463   30817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 00:20:40.211516   30817 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 00:20:40.211563   30817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 00:20:40.211622   30817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 00:20:40.211674   30817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 00:20:40.211752   30817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 00:20:40.211810   30817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 00:20:40.213891   30817 out.go:204]   - Booting up control plane ...
	I0717 00:20:40.213973   30817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 00:20:40.214042   30817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 00:20:40.214102   30817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 00:20:40.214198   30817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 00:20:40.214279   30817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 00:20:40.214313   30817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 00:20:40.214465   30817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 00:20:40.214557   30817 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 00:20:40.214618   30817 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.479451ms
	I0717 00:20:40.214702   30817 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 00:20:40.214757   30817 kubeadm.go:310] [api-check] The API server is healthy after 6.085629153s
	I0717 00:20:40.214852   30817 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 00:20:40.214978   30817 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 00:20:40.215030   30817 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 00:20:40.215187   30817 kubeadm.go:310] [mark-control-plane] Marking the node ha-565881 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 00:20:40.215237   30817 kubeadm.go:310] [bootstrap-token] Using token: 5t00n9.la7matfwtmym5d6q
	I0717 00:20:40.216480   30817 out.go:204]   - Configuring RBAC rules ...
	I0717 00:20:40.216623   30817 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 00:20:40.216726   30817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 00:20:40.216882   30817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 00:20:40.217025   30817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 00:20:40.217157   30817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 00:20:40.217252   30817 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 00:20:40.217351   30817 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 00:20:40.217419   30817 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 00:20:40.217470   30817 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 00:20:40.217477   30817 kubeadm.go:310] 
	I0717 00:20:40.217525   30817 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 00:20:40.217530   30817 kubeadm.go:310] 
	I0717 00:20:40.217595   30817 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 00:20:40.217600   30817 kubeadm.go:310] 
	I0717 00:20:40.217637   30817 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 00:20:40.217718   30817 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 00:20:40.217790   30817 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 00:20:40.217797   30817 kubeadm.go:310] 
	I0717 00:20:40.217841   30817 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 00:20:40.217846   30817 kubeadm.go:310] 
	I0717 00:20:40.217891   30817 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 00:20:40.217899   30817 kubeadm.go:310] 
	I0717 00:20:40.217941   30817 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 00:20:40.218007   30817 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 00:20:40.218089   30817 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 00:20:40.218098   30817 kubeadm.go:310] 
	I0717 00:20:40.218200   30817 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 00:20:40.218276   30817 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 00:20:40.218283   30817 kubeadm.go:310] 
	I0717 00:20:40.218388   30817 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5t00n9.la7matfwtmym5d6q \
	I0717 00:20:40.218488   30817 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 \
	I0717 00:20:40.218513   30817 kubeadm.go:310] 	--control-plane 
	I0717 00:20:40.218518   30817 kubeadm.go:310] 
	I0717 00:20:40.218596   30817 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 00:20:40.218604   30817 kubeadm.go:310] 
	I0717 00:20:40.218678   30817 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5t00n9.la7matfwtmym5d6q \
	I0717 00:20:40.218791   30817 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 
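The join command printed above is what the additional control-plane nodes (ha-565881-m02, m03) will effectively run. As the kubeadm output notes, the upload-certs phase was skipped, so the shared CA material has to be present on the joining node first; minikube handles that distribution itself. Treat the following as a sketch of the join step only, with the token and hash copied from the output above:

    # On the joining control-plane node, after the CA and service-account keys are in place:
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token 5t00n9.la7matfwtmym5d6q \
      --discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 \
      --control-plane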
	I0717 00:20:40.218805   30817 cni.go:84] Creating CNI manager for ""
	I0717 00:20:40.218812   30817 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 00:20:40.220276   30817 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 00:20:40.221441   30817 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 00:20:40.226975   30817 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 00:20:40.226989   30817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 00:20:40.248809   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
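The CNI manifest applied here is minikube's kindnet deployment, chosen automatically for multi-node clusters per the "recommending kindnet" lines above. From the host, its rollout can be checked with kubectl against this profile's context; the DaemonSet name `kindnet` follows minikube's bundled manifest, so verify against your manifest if it differs:

    kubectl --context ha-565881 -n kube-system get daemonset kindnet
    kubectl --context ha-565881 -n kube-system get pods -o wide | grep kindnet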
	I0717 00:20:40.613999   30817 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 00:20:40.614080   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:40.614080   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565881 minikube.k8s.io/updated_at=2024_07_17T00_20_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-565881 minikube.k8s.io/primary=true
	I0717 00:20:40.833632   30817 ops.go:34] apiserver oom_adj: -16
	I0717 00:20:40.858085   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:41.359069   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:41.858639   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:42.358240   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:42.858267   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:43.358581   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:43.858396   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:44.358158   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:44.858731   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:45.359005   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:45.858396   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:46.358378   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:46.858426   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:47.358949   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:47.858961   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:48.358729   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:48.858281   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:49.358411   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:49.858416   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:50.358790   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:50.858531   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:51.358355   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:51.859038   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:52.358185   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:52.858913   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:52.974829   30817 kubeadm.go:1113] duration metric: took 12.360814361s to wait for elevateKubeSystemPrivileges
	I0717 00:20:52.974870   30817 kubeadm.go:394] duration metric: took 24.369853057s to StartCluster
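The repeated `kubectl get sa default` calls above are a poll: minikube created the `minikube-rbac` cluster-admin binding for the kube-system default ServiceAccount earlier and then waits until that ServiceAccount exists before declaring the cluster started. Roughly equivalent shell, as a sketch:

    # Poll until the default ServiceAccount shows up, mirroring the ~12s wait logged above.
    until sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done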
	I0717 00:20:52.974893   30817 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:52.974971   30817 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:20:52.975840   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:52.976081   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 00:20:52.976094   30817 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:20:52.976119   30817 start.go:241] waiting for startup goroutines ...
	I0717 00:20:52.976132   30817 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 00:20:52.976192   30817 addons.go:69] Setting storage-provisioner=true in profile "ha-565881"
	I0717 00:20:52.976204   30817 addons.go:69] Setting default-storageclass=true in profile "ha-565881"
	I0717 00:20:52.976220   30817 addons.go:234] Setting addon storage-provisioner=true in "ha-565881"
	I0717 00:20:52.976241   30817 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-565881"
	I0717 00:20:52.976251   30817 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:20:52.976299   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:20:52.976675   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:20:52.976681   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:20:52.976699   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:20:52.976709   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:20:52.991476   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0717 00:20:52.991807   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40481
	I0717 00:20:52.991972   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:20:52.992149   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:20:52.992518   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:20:52.992538   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:20:52.992651   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:20:52.992670   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:20:52.992846   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:20:52.992999   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:20:52.993190   30817 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:20:52.993376   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:20:52.993406   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:20:52.995211   30817 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:20:52.995468   30817 kapi.go:59] client config for ha-565881: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.crt", KeyFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key", CAFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 00:20:52.995878   30817 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 00:20:52.996010   30817 addons.go:234] Setting addon default-storageclass=true in "ha-565881"
	I0717 00:20:52.996047   30817 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:20:52.996298   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:20:52.996340   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:20:53.008910   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0717 00:20:53.009338   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:20:53.009855   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:20:53.009880   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:20:53.010232   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:20:53.010472   30817 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:20:53.012004   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:53.012110   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I0717 00:20:53.012463   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:20:53.012961   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:20:53.012979   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:20:53.013300   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:20:53.013829   30817 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:20:53.013854   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:20:53.013873   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:20:53.015189   30817 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:20:53.015207   30817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:20:53.015224   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:53.018311   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:53.018722   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:53.018742   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:53.018886   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:53.019064   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:53.019207   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:53.019431   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:20:53.028309   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44659
	I0717 00:20:53.028842   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:20:53.029343   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:20:53.029364   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:20:53.029650   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:20:53.029819   30817 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:20:53.031187   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:53.031361   30817 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:20:53.031371   30817 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:20:53.031387   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:53.033813   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:53.034139   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:53.034165   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:53.034401   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:53.034547   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:53.034672   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:53.034805   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:20:53.129181   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 00:20:53.178168   30817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:20:53.212619   30817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:20:53.633820   30817 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
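The long sed pipeline above splices a `hosts` block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-only gateway 192.168.39.1 inside the cluster. The result can be inspected from the host; the context name is assumed to match the profile:

    kubectl --context ha-565881 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # Expect a stanza like:
    #   hosts {
    #      192.168.39.1 host.minikube.internal
    #      fallthrough
    #   }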
	I0717 00:20:53.936393   30817 main.go:141] libmachine: Making call to close driver server
	I0717 00:20:53.936421   30817 main.go:141] libmachine: (ha-565881) Calling .Close
	I0717 00:20:53.936475   30817 main.go:141] libmachine: Making call to close driver server
	I0717 00:20:53.936494   30817 main.go:141] libmachine: (ha-565881) Calling .Close
	I0717 00:20:53.936776   30817 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:20:53.936792   30817 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:20:53.936800   30817 main.go:141] libmachine: Making call to close driver server
	I0717 00:20:53.936869   30817 main.go:141] libmachine: (ha-565881) Calling .Close
	I0717 00:20:53.937420   30817 main.go:141] libmachine: (ha-565881) DBG | Closing plugin on server side
	I0717 00:20:53.937475   30817 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:20:53.937510   30817 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:20:53.937524   30817 main.go:141] libmachine: Making call to close driver server
	I0717 00:20:53.937542   30817 main.go:141] libmachine: (ha-565881) Calling .Close
	I0717 00:20:53.937555   30817 main.go:141] libmachine: (ha-565881) DBG | Closing plugin on server side
	I0717 00:20:53.937571   30817 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:20:53.937601   30817 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:20:53.939327   30817 main.go:141] libmachine: (ha-565881) DBG | Closing plugin on server side
	I0717 00:20:53.939365   30817 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:20:53.939380   30817 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:20:53.939513   30817 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0717 00:20:53.939526   30817 round_trippers.go:469] Request Headers:
	I0717 00:20:53.939536   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:20:53.939546   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:20:53.951855   30817 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0717 00:20:53.952521   30817 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0717 00:20:53.952540   30817 round_trippers.go:469] Request Headers:
	I0717 00:20:53.952550   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:20:53.952601   30817 round_trippers.go:473]     Content-Type: application/json
	I0717 00:20:53.952609   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:20:53.955743   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:20:53.955934   30817 main.go:141] libmachine: Making call to close driver server
	I0717 00:20:53.955954   30817 main.go:141] libmachine: (ha-565881) Calling .Close
	I0717 00:20:53.956244   30817 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:20:53.956272   30817 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:20:53.957791   30817 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 00:20:53.959258   30817 addons.go:510] duration metric: took 983.123512ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0717 00:20:53.959291   30817 start.go:246] waiting for cluster config update ...
	I0717 00:20:53.959306   30817 start.go:255] writing updated cluster config ...
	I0717 00:20:53.961199   30817 out.go:177] 
	I0717 00:20:53.962649   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:20:53.962714   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:20:53.964434   30817 out.go:177] * Starting "ha-565881-m02" control-plane node in "ha-565881" cluster
	I0717 00:20:53.965802   30817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:20:53.965826   30817 cache.go:56] Caching tarball of preloaded images
	I0717 00:20:53.965911   30817 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:20:53.965922   30817 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:20:53.965987   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:20:53.966147   30817 start.go:360] acquireMachinesLock for ha-565881-m02: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:20:53.966185   30817 start.go:364] duration metric: took 20.851µs to acquireMachinesLock for "ha-565881-m02"
	I0717 00:20:53.966201   30817 start.go:93] Provisioning new machine with config: &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
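The acquireMachinesLock entry above is configured with Delay:500ms and Timeout:13m0s, i.e. the per-profile machines lock is polled rather than held blocking. A generic sketch of that poll-until-deadline pattern (not minikube's lock implementation):

package main

import (
	"fmt"
	"time"
)

// acquireWithTimeout polls tryLock every delay until it succeeds or the
// deadline passes -- the Delay/Timeout semantics shown in the
// acquireMachinesLock log line. Sketch only.
func acquireWithTimeout(tryLock func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if tryLock() {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for machines lock", timeout)
		}
		time.Sleep(delay)
	}
}

func main() {
	unlockAt := time.Now().Add(1 * time.Second)
	try := func() bool { return time.Now().After(unlockAt) }
	fmt.Println(acquireWithTimeout(try, 500*time.Millisecond, 13*time.Minute))
}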
	I0717 00:20:53.966271   30817 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0717 00:20:53.967815   30817 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 00:20:53.967898   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:20:53.967928   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:20:53.982260   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0717 00:20:53.982677   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:20:53.983168   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:20:53.983203   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:20:53.983562   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:20:53.983765   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetMachineName
	I0717 00:20:53.983929   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:20:53.984160   30817 start.go:159] libmachine.API.Create for "ha-565881" (driver="kvm2")
	I0717 00:20:53.984194   30817 client.go:168] LocalClient.Create starting
	I0717 00:20:53.984229   30817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 00:20:53.984270   30817 main.go:141] libmachine: Decoding PEM data...
	I0717 00:20:53.984290   30817 main.go:141] libmachine: Parsing certificate...
	I0717 00:20:53.984353   30817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 00:20:53.984378   30817 main.go:141] libmachine: Decoding PEM data...
	I0717 00:20:53.984395   30817 main.go:141] libmachine: Parsing certificate...
	I0717 00:20:53.984419   30817 main.go:141] libmachine: Running pre-create checks...
	I0717 00:20:53.984429   30817 main.go:141] libmachine: (ha-565881-m02) Calling .PreCreateCheck
	I0717 00:20:53.984638   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetConfigRaw
	I0717 00:20:53.985083   30817 main.go:141] libmachine: Creating machine...
	I0717 00:20:53.985101   30817 main.go:141] libmachine: (ha-565881-m02) Calling .Create
	I0717 00:20:53.985244   30817 main.go:141] libmachine: (ha-565881-m02) Creating KVM machine...
	I0717 00:20:53.986591   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found existing default KVM network
	I0717 00:20:53.986772   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found existing private KVM network mk-ha-565881
	I0717 00:20:53.986915   30817 main.go:141] libmachine: (ha-565881-m02) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02 ...
	I0717 00:20:53.986939   30817 main.go:141] libmachine: (ha-565881-m02) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 00:20:53.986993   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:53.986884   31210 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:20:53.987068   30817 main.go:141] libmachine: (ha-565881-m02) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 00:20:54.229268   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:54.229137   31210 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa...
	I0717 00:20:54.481989   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:54.481836   31210 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/ha-565881-m02.rawdisk...
	I0717 00:20:54.482060   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Writing magic tar header
	I0717 00:20:54.482079   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Writing SSH key tar header
	I0717 00:20:54.482095   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:54.481977   31210 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02 ...
	I0717 00:20:54.482166   30817 main.go:141] libmachine: (ha-565881-m02) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02 (perms=drwx------)
	I0717 00:20:54.482185   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02
	I0717 00:20:54.482206   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 00:20:54.482222   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:20:54.482253   30817 main.go:141] libmachine: (ha-565881-m02) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:20:54.482274   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 00:20:54.482284   30817 main.go:141] libmachine: (ha-565881-m02) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 00:20:54.482299   30817 main.go:141] libmachine: (ha-565881-m02) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 00:20:54.482310   30817 main.go:141] libmachine: (ha-565881-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:20:54.482320   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:20:54.482333   30817 main.go:141] libmachine: (ha-565881-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:20:54.482344   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:20:54.482352   30817 main.go:141] libmachine: (ha-565881-m02) Creating domain...
	I0717 00:20:54.482368   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home
	I0717 00:20:54.482378   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Skipping /home - not owner
	I0717 00:20:54.483304   30817 main.go:141] libmachine: (ha-565881-m02) define libvirt domain using xml: 
	I0717 00:20:54.483324   30817 main.go:141] libmachine: (ha-565881-m02) <domain type='kvm'>
	I0717 00:20:54.483335   30817 main.go:141] libmachine: (ha-565881-m02)   <name>ha-565881-m02</name>
	I0717 00:20:54.483342   30817 main.go:141] libmachine: (ha-565881-m02)   <memory unit='MiB'>2200</memory>
	I0717 00:20:54.483352   30817 main.go:141] libmachine: (ha-565881-m02)   <vcpu>2</vcpu>
	I0717 00:20:54.483360   30817 main.go:141] libmachine: (ha-565881-m02)   <features>
	I0717 00:20:54.483372   30817 main.go:141] libmachine: (ha-565881-m02)     <acpi/>
	I0717 00:20:54.483380   30817 main.go:141] libmachine: (ha-565881-m02)     <apic/>
	I0717 00:20:54.483390   30817 main.go:141] libmachine: (ha-565881-m02)     <pae/>
	I0717 00:20:54.483400   30817 main.go:141] libmachine: (ha-565881-m02)     
	I0717 00:20:54.483410   30817 main.go:141] libmachine: (ha-565881-m02)   </features>
	I0717 00:20:54.483421   30817 main.go:141] libmachine: (ha-565881-m02)   <cpu mode='host-passthrough'>
	I0717 00:20:54.483464   30817 main.go:141] libmachine: (ha-565881-m02)   
	I0717 00:20:54.483497   30817 main.go:141] libmachine: (ha-565881-m02)   </cpu>
	I0717 00:20:54.483510   30817 main.go:141] libmachine: (ha-565881-m02)   <os>
	I0717 00:20:54.483520   30817 main.go:141] libmachine: (ha-565881-m02)     <type>hvm</type>
	I0717 00:20:54.483530   30817 main.go:141] libmachine: (ha-565881-m02)     <boot dev='cdrom'/>
	I0717 00:20:54.483541   30817 main.go:141] libmachine: (ha-565881-m02)     <boot dev='hd'/>
	I0717 00:20:54.483572   30817 main.go:141] libmachine: (ha-565881-m02)     <bootmenu enable='no'/>
	I0717 00:20:54.483597   30817 main.go:141] libmachine: (ha-565881-m02)   </os>
	I0717 00:20:54.483607   30817 main.go:141] libmachine: (ha-565881-m02)   <devices>
	I0717 00:20:54.483617   30817 main.go:141] libmachine: (ha-565881-m02)     <disk type='file' device='cdrom'>
	I0717 00:20:54.483632   30817 main.go:141] libmachine: (ha-565881-m02)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/boot2docker.iso'/>
	I0717 00:20:54.483643   30817 main.go:141] libmachine: (ha-565881-m02)       <target dev='hdc' bus='scsi'/>
	I0717 00:20:54.483655   30817 main.go:141] libmachine: (ha-565881-m02)       <readonly/>
	I0717 00:20:54.483668   30817 main.go:141] libmachine: (ha-565881-m02)     </disk>
	I0717 00:20:54.483687   30817 main.go:141] libmachine: (ha-565881-m02)     <disk type='file' device='disk'>
	I0717 00:20:54.483705   30817 main.go:141] libmachine: (ha-565881-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:20:54.483734   30817 main.go:141] libmachine: (ha-565881-m02)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/ha-565881-m02.rawdisk'/>
	I0717 00:20:54.483754   30817 main.go:141] libmachine: (ha-565881-m02)       <target dev='hda' bus='virtio'/>
	I0717 00:20:54.483767   30817 main.go:141] libmachine: (ha-565881-m02)     </disk>
	I0717 00:20:54.483778   30817 main.go:141] libmachine: (ha-565881-m02)     <interface type='network'>
	I0717 00:20:54.483790   30817 main.go:141] libmachine: (ha-565881-m02)       <source network='mk-ha-565881'/>
	I0717 00:20:54.483804   30817 main.go:141] libmachine: (ha-565881-m02)       <model type='virtio'/>
	I0717 00:20:54.483816   30817 main.go:141] libmachine: (ha-565881-m02)     </interface>
	I0717 00:20:54.483828   30817 main.go:141] libmachine: (ha-565881-m02)     <interface type='network'>
	I0717 00:20:54.483838   30817 main.go:141] libmachine: (ha-565881-m02)       <source network='default'/>
	I0717 00:20:54.483847   30817 main.go:141] libmachine: (ha-565881-m02)       <model type='virtio'/>
	I0717 00:20:54.483856   30817 main.go:141] libmachine: (ha-565881-m02)     </interface>
	I0717 00:20:54.483864   30817 main.go:141] libmachine: (ha-565881-m02)     <serial type='pty'>
	I0717 00:20:54.483879   30817 main.go:141] libmachine: (ha-565881-m02)       <target port='0'/>
	I0717 00:20:54.483891   30817 main.go:141] libmachine: (ha-565881-m02)     </serial>
	I0717 00:20:54.483902   30817 main.go:141] libmachine: (ha-565881-m02)     <console type='pty'>
	I0717 00:20:54.483913   30817 main.go:141] libmachine: (ha-565881-m02)       <target type='serial' port='0'/>
	I0717 00:20:54.483921   30817 main.go:141] libmachine: (ha-565881-m02)     </console>
	I0717 00:20:54.483930   30817 main.go:141] libmachine: (ha-565881-m02)     <rng model='virtio'>
	I0717 00:20:54.483941   30817 main.go:141] libmachine: (ha-565881-m02)       <backend model='random'>/dev/random</backend>
	I0717 00:20:54.483950   30817 main.go:141] libmachine: (ha-565881-m02)     </rng>
	I0717 00:20:54.483965   30817 main.go:141] libmachine: (ha-565881-m02)     
	I0717 00:20:54.483981   30817 main.go:141] libmachine: (ha-565881-m02)     
	I0717 00:20:54.483994   30817 main.go:141] libmachine: (ha-565881-m02)   </devices>
	I0717 00:20:54.484004   30817 main.go:141] libmachine: (ha-565881-m02) </domain>
	I0717 00:20:54.484038   30817 main.go:141] libmachine: (ha-565881-m02) 
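The domain definition dumped above is what the kvm2 driver hands to libvirt when defining the new VM. A trimmed-down sketch of rendering such a definition with text/template; the field names and template here are invented for illustration and are not the driver's own:

package main

import (
	"os"
	"text/template"
)

// domainXML is a cut-down version of the libvirt domain printed in the log;
// only the values that vary per node are templated. Illustrative names.
const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	_ = tmpl.Execute(os.Stdout, map[string]interface{}{
		"Name":      "ha-565881-m02",
		"MemoryMiB": 2200,
		"CPUs":      2,
		"DiskPath":  "/path/to/ha-565881-m02.rawdisk", // hypothetical path
		"Network":   "mk-ha-565881",
	})
}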
	I0717 00:20:54.490515   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:7e:66:70 in network default
	I0717 00:20:54.491158   30817 main.go:141] libmachine: (ha-565881-m02) Ensuring networks are active...
	I0717 00:20:54.491184   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:54.491861   30817 main.go:141] libmachine: (ha-565881-m02) Ensuring network default is active
	I0717 00:20:54.492173   30817 main.go:141] libmachine: (ha-565881-m02) Ensuring network mk-ha-565881 is active
	I0717 00:20:54.492634   30817 main.go:141] libmachine: (ha-565881-m02) Getting domain xml...
	I0717 00:20:54.493403   30817 main.go:141] libmachine: (ha-565881-m02) Creating domain...
	I0717 00:20:55.752481   30817 main.go:141] libmachine: (ha-565881-m02) Waiting to get IP...
	I0717 00:20:55.753160   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:55.753591   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:55.753634   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:55.753577   31210 retry.go:31] will retry after 269.169887ms: waiting for machine to come up
	I0717 00:20:56.024001   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:56.024486   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:56.024521   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:56.024457   31210 retry.go:31] will retry after 235.250326ms: waiting for machine to come up
	I0717 00:20:56.261736   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:56.262142   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:56.262167   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:56.262096   31210 retry.go:31] will retry after 429.39531ms: waiting for machine to come up
	I0717 00:20:56.692788   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:56.693291   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:56.693324   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:56.693235   31210 retry.go:31] will retry after 578.982983ms: waiting for machine to come up
	I0717 00:20:57.273851   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:57.274257   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:57.274286   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:57.274229   31210 retry.go:31] will retry after 494.250759ms: waiting for machine to come up
	I0717 00:20:57.769699   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:57.770127   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:57.770161   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:57.770079   31210 retry.go:31] will retry after 683.010458ms: waiting for machine to come up
	I0717 00:20:58.454732   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:58.455161   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:58.455191   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:58.455111   31210 retry.go:31] will retry after 1.089607359s: waiting for machine to come up
	I0717 00:20:59.546879   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:59.547370   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:59.547416   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:59.547346   31210 retry.go:31] will retry after 1.380186146s: waiting for machine to come up
	I0717 00:21:00.929935   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:00.930446   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:21:00.930475   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:21:00.930366   31210 retry.go:31] will retry after 1.248137918s: waiting for machine to come up
	I0717 00:21:02.180983   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:02.181510   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:21:02.181535   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:21:02.181457   31210 retry.go:31] will retry after 2.268121621s: waiting for machine to come up
	I0717 00:21:04.451480   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:04.451977   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:21:04.452008   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:21:04.451910   31210 retry.go:31] will retry after 2.654411879s: waiting for machine to come up
	I0717 00:21:07.107555   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:07.108046   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:21:07.108079   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:21:07.107996   31210 retry.go:31] will retry after 3.432158661s: waiting for machine to come up
	I0717 00:21:10.542527   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:10.542978   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:21:10.543006   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:21:10.542923   31210 retry.go:31] will retry after 3.832769057s: waiting for machine to come up
	I0717 00:21:14.376753   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.377156   30817 main.go:141] libmachine: (ha-565881-m02) Found IP for machine: 192.168.39.14
	I0717 00:21:14.377183   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has current primary IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
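The repeated "will retry after ..." lines above are the wait-for-IP loop backing off with growing, jittered delays until a DHCP lease appears for the new MAC. A generic sketch of that backoff pattern, assuming a lookup callback that returns the IP once the lease exists (not minikube's retry package):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup with a jittered, doubling backoff until it returns
// a non-empty address or maxWait elapses. Sketch only.
func waitForIP(lookup func() string, maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip := lookup(); ip != "" {
			return ip, nil
		}
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("machine did not get an IP within %s", maxWait)
}

func main() {
	start := time.Now()
	lookup := func() string {
		if time.Since(start) > 3*time.Second {
			return "192.168.39.14"
		}
		return ""
	}
	fmt.Println(waitForIP(lookup, time.Minute))
}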
	I0717 00:21:14.377192   30817 main.go:141] libmachine: (ha-565881-m02) Reserving static IP address...
	I0717 00:21:14.377514   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find host DHCP lease matching {name: "ha-565881-m02", mac: "52:54:00:10:b5:c3", ip: "192.168.39.14"} in network mk-ha-565881
	I0717 00:21:14.447323   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Getting to WaitForSSH function...
	I0717 00:21:14.447353   30817 main.go:141] libmachine: (ha-565881-m02) Reserved static IP address: 192.168.39.14
	I0717 00:21:14.447365   30817 main.go:141] libmachine: (ha-565881-m02) Waiting for SSH to be available...
	I0717 00:21:14.449994   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.450435   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:14.450460   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.450620   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Using SSH client type: external
	I0717 00:21:14.450653   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa (-rw-------)
	I0717 00:21:14.450686   30817 main.go:141] libmachine: (ha-565881-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:21:14.450697   30817 main.go:141] libmachine: (ha-565881-m02) DBG | About to run SSH command:
	I0717 00:21:14.450705   30817 main.go:141] libmachine: (ha-565881-m02) DBG | exit 0
	I0717 00:21:14.576658   30817 main.go:141] libmachine: (ha-565881-m02) DBG | SSH cmd err, output: <nil>: 
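The "Using SSH client type: external" DBG output prints the exact argv passed to /usr/bin/ssh for the `exit 0` reachability probe. Rebuilding that argv in Go, with the user, IP and key path hard-coded for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// externalSSHArgs mirrors the option list printed in the
// "Using SSH client type: external" DBG output above.
func externalSSHArgs(user, ip, keyPath string) []string {
	return []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		fmt.Sprintf("%s@%s", user, ip),
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
	}
}

func main() {
	args := append(externalSSHArgs("docker", "192.168.39.14", "/path/to/id_rsa"), "exit 0")
	cmd := exec.Command("/usr/bin/ssh", args...)
	fmt.Println(cmd.Args) // printed rather than executed in this sketch
}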
	I0717 00:21:14.576905   30817 main.go:141] libmachine: (ha-565881-m02) KVM machine creation complete!
	I0717 00:21:14.577174   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetConfigRaw
	I0717 00:21:14.577651   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:14.577864   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:14.577990   30817 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:21:14.578004   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:21:14.579238   30817 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:21:14.579254   30817 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:21:14.579260   30817 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:21:14.579266   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:14.581509   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.581847   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:14.581873   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.582047   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:14.582195   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.582336   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.582472   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:14.582607   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:21:14.582858   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0717 00:21:14.582883   30817 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:21:14.683852   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:21:14.683880   30817 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:21:14.683889   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:14.686847   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.687236   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:14.687266   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.687450   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:14.687642   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.687792   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.687911   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:14.688042   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:21:14.688221   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0717 00:21:14.688234   30817 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:21:14.793548   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:21:14.793644   30817 main.go:141] libmachine: found compatible host: buildroot
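Provisioner detection above amounts to `cat /etc/os-release` plus key=value parsing; the ID and NAME fields are what map the guest to the Buildroot provisioner. A small parsing sketch (not minikube's detector):

package main

import (
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release output (as captured in the log above)
// into a map of key -> unquoted value. Sketch only.
func parseOSRelease(data string) map[string]string {
	out := map[string]string{}
	for _, line := range strings.Split(data, "\n") {
		key, val, ok := strings.Cut(strings.TrimSpace(line), "=")
		if !ok || key == "" {
			continue
		}
		out[key] = strings.Trim(val, `"`)
	}
	return out
}

func main() {
	osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(osRelease)
	fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2023.02.9
}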
	I0717 00:21:14.793659   30817 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:21:14.793673   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetMachineName
	I0717 00:21:14.793986   30817 buildroot.go:166] provisioning hostname "ha-565881-m02"
	I0717 00:21:14.794012   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetMachineName
	I0717 00:21:14.794205   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:14.797055   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.797427   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:14.797454   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.797665   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:14.797849   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.798030   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.798192   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:14.798356   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:21:14.798508   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0717 00:21:14.798521   30817 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565881-m02 && echo "ha-565881-m02" | sudo tee /etc/hostname
	I0717 00:21:14.915845   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881-m02
	
	I0717 00:21:14.915872   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:14.918674   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.919009   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:14.919035   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.919218   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:14.919401   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.919611   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.919751   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:14.919905   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:21:14.920108   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0717 00:21:14.920135   30817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565881-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565881-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565881-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:21:15.039395   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:21:15.039426   30817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:21:15.039443   30817 buildroot.go:174] setting up certificates
	I0717 00:21:15.039453   30817 provision.go:84] configureAuth start
	I0717 00:21:15.039484   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetMachineName
	I0717 00:21:15.039767   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:21:15.042348   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.042651   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.042677   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.042813   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:15.045027   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.045381   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.045409   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.045512   30817 provision.go:143] copyHostCerts
	I0717 00:21:15.045542   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:21:15.045577   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 00:21:15.045585   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:21:15.045645   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:21:15.045727   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:21:15.045743   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 00:21:15.045750   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:21:15.045774   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:21:15.045832   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:21:15.045848   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 00:21:15.045854   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:21:15.045877   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:21:15.045939   30817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.ha-565881-m02 san=[127.0.0.1 192.168.39.14 ha-565881-m02 localhost minikube]
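The server certificate generated above carries both IP and DNS SANs (127.0.0.1, 192.168.39.14, ha-565881-m02, localhost, minikube). A sketch of building such an x509 template with the standard library; key generation and CA signing are omitted, and the 26280h lifetime is copied from the CertExpiration field in the machine config above:

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertTemplate builds an x509 template with the SANs listed in the
// provision log; signing against the cluster CA is left out. Sketch only.
func serverCertTemplate(org string, sans []string) *x509.Certificate {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the machine config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	return tmpl
}

func main() {
	t := serverCertTemplate("jenkins.ha-565881-m02", []string{"127.0.0.1", "192.168.39.14", "ha-565881-m02", "localhost", "minikube"})
	fmt.Println(t.IPAddresses, t.DNSNames)
}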
	I0717 00:21:15.186326   30817 provision.go:177] copyRemoteCerts
	I0717 00:21:15.186385   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:21:15.186408   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:15.188981   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.189408   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.189439   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.189612   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:15.189791   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.189934   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:15.190080   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	I0717 00:21:15.270806   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:21:15.270866   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:21:15.295339   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:21:15.295409   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:21:15.324354   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:21:15.324424   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:21:15.349737   30817 provision.go:87] duration metric: took 310.27257ms to configureAuth
	I0717 00:21:15.349762   30817 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:21:15.349935   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:21:15.350020   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:15.352329   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.352623   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.352648   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.352791   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:15.352976   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.353139   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.353294   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:15.353496   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:21:15.353640   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0717 00:21:15.353654   30817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:21:15.611222   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:21:15.611252   30817 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:21:15.611264   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetURL
	I0717 00:21:15.612630   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Using libvirt version 6000000
	I0717 00:21:15.614528   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.614863   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.614890   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.615037   30817 main.go:141] libmachine: Docker is up and running!
	I0717 00:21:15.615053   30817 main.go:141] libmachine: Reticulating splines...
	I0717 00:21:15.615061   30817 client.go:171] duration metric: took 21.630857353s to LocalClient.Create
	I0717 00:21:15.615086   30817 start.go:167] duration metric: took 21.630927441s to libmachine.API.Create "ha-565881"
	I0717 00:21:15.615096   30817 start.go:293] postStartSetup for "ha-565881-m02" (driver="kvm2")
	I0717 00:21:15.615107   30817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:21:15.615133   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:15.615356   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:21:15.615380   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:15.617451   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.617831   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.617858   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.617983   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:15.618161   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.618333   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:15.618475   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	I0717 00:21:15.698806   30817 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:21:15.702981   30817 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:21:15.703007   30817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 00:21:15.703066   30817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 00:21:15.703153   30817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 00:21:15.703165   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /etc/ssl/certs/200682.pem
	I0717 00:21:15.703274   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:21:15.712902   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:21:15.737145   30817 start.go:296] duration metric: took 122.012784ms for postStartSetup
	I0717 00:21:15.737237   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetConfigRaw
	I0717 00:21:15.737846   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:21:15.740271   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.740683   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.740715   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.740945   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:21:15.741163   30817 start.go:128] duration metric: took 21.774880748s to createHost
	I0717 00:21:15.741192   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:15.743833   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.744253   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.744292   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.744498   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:15.744671   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.744822   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.744971   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:15.745097   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:21:15.745252   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0717 00:21:15.745261   30817 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:21:15.849161   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721175675.807023074
	
	I0717 00:21:15.849195   30817 fix.go:216] guest clock: 1721175675.807023074
	I0717 00:21:15.849205   30817 fix.go:229] Guest: 2024-07-17 00:21:15.807023074 +0000 UTC Remote: 2024-07-17 00:21:15.741179027 +0000 UTC m=+77.033871343 (delta=65.844047ms)
	I0717 00:21:15.849224   30817 fix.go:200] guest clock delta is within tolerance: 65.844047ms
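fix.go above compares the guest clock, read over SSH, against the local clock and accepts the 65.8ms delta as within tolerance. A sketch of that comparison; the 2s tolerance used in main is an assumption for illustration, not necessarily the value minikube applies:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is within tolerance of the
// host clock, as checked in the fix.go lines above.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(65844047 * time.Nanosecond) // delta reported in the log
	delta, ok := clockDeltaOK(guest, host, 2*time.Second) // tolerance is assumed
	fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
}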
	I0717 00:21:15.849229   30817 start.go:83] releasing machines lock for "ha-565881-m02", held for 21.883035485s
	I0717 00:21:15.849246   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:15.849521   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:21:15.851948   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.852298   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.852326   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.854403   30817 out.go:177] * Found network options:
	I0717 00:21:15.855745   30817 out.go:177]   - NO_PROXY=192.168.39.238
	W0717 00:21:15.857061   30817 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:21:15.857088   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:15.857574   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:15.857768   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:15.857874   30817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:21:15.857915   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	W0717 00:21:15.857996   30817 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:21:15.858072   30817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:21:15.858092   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:15.860570   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.860897   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.860923   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.860984   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.861048   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:15.861196   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.861337   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:15.861477   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.861489   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	I0717 00:21:15.861499   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.861652   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:15.861786   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.861950   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:15.862093   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	I0717 00:21:16.098712   30817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:21:16.105464   30817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:21:16.105534   30817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:21:16.122753   30817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:21:16.122781   30817 start.go:495] detecting cgroup driver to use...
	I0717 00:21:16.122839   30817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:21:16.138274   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:21:16.152974   30817 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:21:16.153036   30817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:21:16.167520   30817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:21:16.181000   30817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:21:16.302425   30817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:21:16.450852   30817 docker.go:233] disabling docker service ...
	I0717 00:21:16.450912   30817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:21:16.465317   30817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:21:16.478214   30817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:21:16.621899   30817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:21:16.753063   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:21:16.767162   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:21:16.785485   30817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:21:16.785551   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.796724   30817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:21:16.796797   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.807450   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.817799   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.830141   30817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:21:16.841132   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.851542   30817 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.868104   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.877936   30817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:21:16.886919   30817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:21:16.886972   30817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:21:16.899553   30817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:21:16.908759   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:21:17.021904   30817 ssh_runner.go:195] Run: sudo systemctl restart crio
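	The sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs driver, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. A quick hand-run sanity check of the resulting values, assuming shell access on the node (the grep pattern is illustrative):
	  # confirm the pause image, cgroup driver and unprivileged-port sysctl landed in the drop-in
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # confirm CRI-O came back up and is serving its socket
	  sudo systemctl is-active crio && sudo crictl version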
	I0717 00:21:17.156470   30817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:21:17.156547   30817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:21:17.161101   30817 start.go:563] Will wait 60s for crictl version
	I0717 00:21:17.161152   30817 ssh_runner.go:195] Run: which crictl
	I0717 00:21:17.165085   30817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:21:17.209004   30817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:21:17.209083   30817 ssh_runner.go:195] Run: crio --version
	I0717 00:21:17.239861   30817 ssh_runner.go:195] Run: crio --version
	I0717 00:21:17.268366   30817 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:21:17.269688   30817 out.go:177]   - env NO_PROXY=192.168.39.238
	I0717 00:21:17.270947   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:21:17.273446   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:17.273808   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:17.273837   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:17.274003   30817 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:21:17.278302   30817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:21:17.291208   30817 mustload.go:65] Loading cluster: ha-565881
	I0717 00:21:17.291377   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:21:17.291612   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:21:17.291634   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:21:17.307255   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
	I0717 00:21:17.307672   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:21:17.308186   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:21:17.308204   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:21:17.308512   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:21:17.308738   30817 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:21:17.310197   30817 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:21:17.310480   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:21:17.310507   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:21:17.326099   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46873
	I0717 00:21:17.326523   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:21:17.326981   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:21:17.327001   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:21:17.327299   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:21:17.327460   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:21:17.327611   30817 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881 for IP: 192.168.39.14
	I0717 00:21:17.327622   30817 certs.go:194] generating shared ca certs ...
	I0717 00:21:17.327635   30817 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:21:17.327744   30817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 00:21:17.327781   30817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 00:21:17.327789   30817 certs.go:256] generating profile certs ...
	I0717 00:21:17.327848   30817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key
	I0717 00:21:17.327872   30817 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.edd24c54
	I0717 00:21:17.327886   30817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.edd24c54 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.14 192.168.39.254]
	I0717 00:21:17.466680   30817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.edd24c54 ...
	I0717 00:21:17.466707   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.edd24c54: {Name:mkca826e3a25ad9472bf780c9aff1b7a7706746f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:21:17.466893   30817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.edd24c54 ...
	I0717 00:21:17.466909   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.edd24c54: {Name:mke091d01f37b34ad0115442b7381ff6068562db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:21:17.467003   30817 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.edd24c54 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt
	I0717 00:21:17.467242   30817 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.edd24c54 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key
	I0717 00:21:17.467495   30817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key
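	The freshly minted apiserver certificate above is generated with the full SAN set: the service IPs, both node IPs and the 192.168.39.254 VIP. To see the SANs actually embedded in the written cert, one could run something like the following against the profile copy (hedged example, not minikube code):
	  # list the IP SANs baked into the apiserver certificate
	  openssl x509 -noout -text -in /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt | grep -A1 'Subject Alternative Name'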
	I0717 00:21:17.467511   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:21:17.467525   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:21:17.467538   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:21:17.467549   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:21:17.467561   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:21:17.467572   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:21:17.467583   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:21:17.467595   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:21:17.467644   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 00:21:17.467671   30817 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 00:21:17.467680   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:21:17.467702   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:21:17.467722   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:21:17.467742   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 00:21:17.467775   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:21:17.467801   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:21:17.467815   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem -> /usr/share/ca-certificates/20068.pem
	I0717 00:21:17.467826   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /usr/share/ca-certificates/200682.pem
	I0717 00:21:17.467854   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:21:17.470912   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:21:17.471291   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:21:17.471317   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:21:17.471473   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:21:17.471667   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:21:17.471811   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:21:17.471956   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:21:17.548903   30817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0717 00:21:17.555065   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 00:21:17.572563   30817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0717 00:21:17.576829   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0717 00:21:17.587396   30817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 00:21:17.592366   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 00:21:17.604289   30817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0717 00:21:17.608868   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0717 00:21:17.620021   30817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0717 00:21:17.624405   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 00:21:17.634338   30817 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0717 00:21:17.638306   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 00:21:17.648687   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:21:17.673003   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:21:17.695748   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:21:17.718148   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:21:17.740998   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 00:21:17.764136   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:21:17.787524   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:21:17.811551   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:21:17.837099   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:21:17.861188   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 00:21:17.885630   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 00:21:17.909796   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 00:21:17.926407   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0717 00:21:17.942994   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 00:21:17.959243   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0717 00:21:17.975591   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 00:21:17.991629   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 00:21:18.007702   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 00:21:18.023715   30817 ssh_runner.go:195] Run: openssl version
	I0717 00:21:18.029407   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:21:18.040935   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:21:18.045390   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:21:18.045439   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:21:18.051062   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:21:18.062586   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 00:21:18.073376   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 00:21:18.078357   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 00:21:18.078419   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 00:21:18.084215   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 00:21:18.094970   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 00:21:18.105447   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 00:21:18.109794   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 00:21:18.109838   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 00:21:18.115321   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
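	The openssl x509 -hash calls above compute the subject hash that OpenSSL expects as the symlink name under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem). The same linking step, spelled out by hand on the node (assumes the hash printed matches the one used in the log):
	  # the subject hash drives the /etc/ssl/certs/<hash>.0 symlink name
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"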
	I0717 00:21:18.126082   30817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:21:18.130095   30817 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:21:18.130158   30817 kubeadm.go:934] updating node {m02 192.168.39.14 8443 v1.30.2 crio true true} ...
	I0717 00:21:18.130242   30817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565881-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
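	The kubelet unit fragment above is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 312-byte scp further down). Once it is in place, the effective unit including the ExecStart override can be inspected with systemd's own tooling, e.g.:
	  # show the base unit plus every drop-in, including the ExecStart override above
	  systemctl cat kubelet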
	I0717 00:21:18.130262   30817 kube-vip.go:115] generating kube-vip config ...
	I0717 00:21:18.130291   30817 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:21:18.153747   30817 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:21:18.153826   30817 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
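	The manifest above is written as a static pod to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp further down), so the kubelet starts it directly and kube-vip claims the 192.168.39.254 VIP on eth0 once it wins leader election. Two rough checks one could run on the node after kubelet is up (illustrative commands, not part of the test):
	  # the static pod manifest the kubelet is watching
	  sudo cat /etc/kubernetes/manifests/kube-vip.yaml
	  # once this node holds the plndr-cp-lock lease, the VIP shows up on eth0
	  ip addr show eth0 | grep 192.168.39.254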
	I0717 00:21:18.153919   30817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:21:18.166896   30817 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 00:21:18.166961   30817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 00:21:18.176849   30817 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 00:21:18.176877   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:21:18.176955   30817 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:21:18.176992   30817 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0717 00:21:18.177032   30817 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0717 00:21:18.181262   30817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 00:21:18.181297   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 00:21:18.735720   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:21:18.735801   30817 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:21:18.743788   30817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 00:21:18.743824   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 00:21:19.087166   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:21:19.102558   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:21:19.102657   30817 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:21:19.106839   30817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 00:21:19.106876   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
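	The kubectl/kubeadm/kubelet binaries above are pulled from dl.k8s.io with a checksum=file: reference to the published .sha256 files. The same verification can be repeated manually against the cached copies (a sketch; the paths and the echo/sha256sum pairing follow the upstream download pattern, not minikube's code):
	  cd /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2
	  for bin in kubectl kubeadm kubelet; do
	    # the .sha256 file holds only the hex digest, so pair it with the filename for sha256sum -c
	    curl -sSL "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/${bin}.sha256" -o "${bin}.sha256"
	    echo "$(cat "${bin}.sha256")  ${bin}" | sha256sum --check
	  done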
	I0717 00:21:19.518021   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 00:21:19.528996   30817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0717 00:21:19.546325   30817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:21:19.563343   30817 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:21:19.581500   30817 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:21:19.585989   30817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:21:19.598455   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:21:19.731813   30817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:21:19.748573   30817 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:21:19.749022   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:21:19.749076   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:21:19.763910   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43855
	I0717 00:21:19.764403   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:21:19.764905   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:21:19.764929   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:21:19.765272   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:21:19.765452   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:21:19.765651   30817 start.go:317] joinCluster: &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cluster
Name:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:21:19.765738   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 00:21:19.765762   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:21:19.768616   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:21:19.769076   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:21:19.769101   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:21:19.769316   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:21:19.769489   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:21:19.769643   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:21:19.769796   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:21:19.946621   30817 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:21:19.946669   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9nm4lz.saewglj5gs64tmcu --discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565881-m02 --control-plane --apiserver-advertise-address=192.168.39.14 --apiserver-bind-port=8443"
	I0717 00:21:43.029624   30817 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9nm4lz.saewglj5gs64tmcu --discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565881-m02 --control-plane --apiserver-advertise-address=192.168.39.14 --apiserver-bind-port=8443": (23.082929282s)
	I0717 00:21:43.029658   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 00:21:43.582797   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565881-m02 minikube.k8s.io/updated_at=2024_07_17T00_21_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-565881 minikube.k8s.io/primary=false
	I0717 00:21:43.721990   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565881-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 00:21:43.853465   30817 start.go:319] duration metric: took 24.087809331s to joinCluster
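	After kubeadm join ... --control-plane returns and the node is labeled and untainted above, the second control plane should be visible from the cluster. A quick manual confirmation, assuming the profile's kubeconfig context ha-565881 (illustrative, not part of the test flow):
	  kubectl --context ha-565881 get nodes -o wide
	  # each control-plane node contributes an etcd member and apiserver static pod
	  kubectl --context ha-565881 -n kube-system get pods -l component=etcd -o wide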
	I0717 00:21:43.853542   30817 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:21:43.853974   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:21:43.855081   30817 out.go:177] * Verifying Kubernetes components...
	I0717 00:21:43.856288   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:21:44.180404   30817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:21:44.253103   30817 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:21:44.253404   30817 kapi.go:59] client config for ha-565881: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.crt", KeyFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key", CAFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 00:21:44.253462   30817 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.238:8443
	I0717 00:21:44.253707   30817 node_ready.go:35] waiting up to 6m0s for node "ha-565881-m02" to be "Ready" ...
	I0717 00:21:44.253824   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:44.253837   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:44.253848   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:44.253856   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:44.263354   30817 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 00:21:44.754358   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:44.754382   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:44.754394   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:44.754399   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:44.757655   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:45.253959   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:45.253985   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:45.253996   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:45.254001   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:45.265896   30817 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0717 00:21:45.754930   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:45.754954   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:45.754963   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:45.754971   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:45.760499   30817 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:21:46.254468   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:46.254488   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:46.254496   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:46.254501   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:46.258680   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:46.259471   30817 node_ready.go:53] node "ha-565881-m02" has status "Ready":"False"
	I0717 00:21:46.754803   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:46.754822   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:46.754831   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:46.754837   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:46.758090   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:47.254031   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:47.254065   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:47.254073   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:47.254078   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:47.256739   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:47.754688   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:47.754710   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:47.754718   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:47.754723   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:47.758191   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:48.254475   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:48.254499   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:48.254507   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:48.254513   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:48.258146   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:48.754396   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:48.754416   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:48.754424   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:48.754428   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:48.758447   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:48.759070   30817 node_ready.go:53] node "ha-565881-m02" has status "Ready":"False"
	I0717 00:21:49.254387   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:49.254411   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:49.254420   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:49.254425   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:49.257800   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:49.754890   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:49.754912   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:49.754925   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:49.754928   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:49.758523   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:50.254296   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:50.254317   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:50.254324   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:50.254330   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:50.257421   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:50.754048   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:50.754070   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:50.754078   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:50.754081   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:50.757489   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:51.254264   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:51.254284   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:51.254292   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:51.254296   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:51.257543   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:51.258151   30817 node_ready.go:53] node "ha-565881-m02" has status "Ready":"False"
	I0717 00:21:51.754614   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:51.754640   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:51.754651   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:51.754656   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:51.758000   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:52.254060   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:52.254081   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:52.254089   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:52.254094   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:52.257462   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:52.754781   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:52.754802   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:52.754811   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:52.754815   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:52.757846   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:53.254358   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:53.254379   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:53.254388   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:53.254391   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:53.258341   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:53.259618   30817 node_ready.go:53] node "ha-565881-m02" has status "Ready":"False"
	I0717 00:21:53.754126   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:53.754145   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:53.754152   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:53.754157   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:53.757564   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:54.254041   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:54.254062   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:54.254070   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:54.254074   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:54.257155   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:54.754336   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:54.754357   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:54.754366   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:54.754369   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:54.758497   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:55.253962   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:55.253990   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:55.254000   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:55.254006   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:55.257391   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:55.754572   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:55.754593   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:55.754602   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:55.754607   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:55.757728   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:55.758177   30817 node_ready.go:53] node "ha-565881-m02" has status "Ready":"False"
	I0717 00:21:56.254691   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:56.254717   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:56.254729   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:56.254736   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:56.257897   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:56.754916   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:56.754940   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:56.754951   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:56.754958   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:56.757978   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:57.254531   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:57.254548   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.254556   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.254561   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.258790   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:57.259571   30817 node_ready.go:49] node "ha-565881-m02" has status "Ready":"True"
	I0717 00:21:57.259589   30817 node_ready.go:38] duration metric: took 13.005865099s for node "ha-565881-m02" to be "Ready" ...
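	The node_ready poll above re-issues GET /api/v1/nodes/ha-565881-m02 roughly every 500ms until the Ready condition flips to True (about 13s here). The same wait, expressed with stock kubectl instead of minikube's round-tripper loop (illustrative only, same ha-565881 context assumption as before):
	  kubectl --context ha-565881 wait --for=condition=Ready node/ha-565881-m02 --timeout=6m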
	I0717 00:21:57.259601   30817 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:21:57.259673   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:21:57.259687   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.259696   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.259704   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.264123   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:57.269901   30817 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7wsqq" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.269970   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7wsqq
	I0717 00:21:57.269978   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.269985   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.269989   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.273955   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:57.275242   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:21:57.275256   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.275267   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.275273   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.278671   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:57.280064   30817 pod_ready.go:92] pod "coredns-7db6d8ff4d-7wsqq" in "kube-system" namespace has status "Ready":"True"
	I0717 00:21:57.280078   30817 pod_ready.go:81] duration metric: took 10.155563ms for pod "coredns-7db6d8ff4d-7wsqq" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.280087   30817 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xftzx" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.280142   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xftzx
	I0717 00:21:57.280150   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.280157   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.280163   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.283712   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:57.284434   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:21:57.284451   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.284461   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.284466   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.287825   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:57.288291   30817 pod_ready.go:92] pod "coredns-7db6d8ff4d-xftzx" in "kube-system" namespace has status "Ready":"True"
	I0717 00:21:57.288306   30817 pod_ready.go:81] duration metric: took 8.211559ms for pod "coredns-7db6d8ff4d-xftzx" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.288314   30817 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.288365   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881
	I0717 00:21:57.288375   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.288382   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.288386   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.294625   30817 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:21:57.295141   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:21:57.295155   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.295162   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.295166   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.297661   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:57.298177   30817 pod_ready.go:92] pod "etcd-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:21:57.298192   30817 pod_ready.go:81] duration metric: took 9.872878ms for pod "etcd-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.298202   30817 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.298249   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m02
	I0717 00:21:57.298256   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.298263   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.298267   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.300843   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:57.301427   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:57.301444   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.301455   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.301460   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.303773   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:57.798433   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m02
	I0717 00:21:57.798453   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.798461   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.798465   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.800827   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:57.801344   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:57.801358   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.801365   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.801369   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.804962   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:58.298823   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m02
	I0717 00:21:58.298860   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:58.298873   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:58.298879   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:58.302157   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:58.302829   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:58.302849   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:58.302860   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:58.302865   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:58.305603   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:58.798991   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m02
	I0717 00:21:58.799016   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:58.799026   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:58.799031   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:58.803532   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:58.804326   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:58.804350   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:58.804359   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:58.804365   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:58.808312   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:59.299268   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m02
	I0717 00:21:59.299293   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.299307   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.299314   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.302932   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:59.303686   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:59.303704   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.303715   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.303722   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.306666   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:59.307525   30817 pod_ready.go:92] pod "etcd-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:21:59.307543   30817 pod_ready.go:81] duration metric: took 2.009335864s for pod "etcd-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:59.307558   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:59.307612   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881
	I0717 00:21:59.307619   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.307626   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.307630   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.310245   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:59.311000   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:21:59.311018   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.311026   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.311030   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.313220   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:59.313903   30817 pod_ready.go:92] pod "kube-apiserver-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:21:59.313923   30817 pod_ready.go:81] duration metric: took 6.357608ms for pod "kube-apiserver-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:59.313934   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:59.455298   30817 request.go:629] Waited for 141.297144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881-m02
	I0717 00:21:59.455352   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881-m02
	I0717 00:21:59.455358   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.455363   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.455367   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.460187   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:59.655404   30817 request.go:629] Waited for 194.399661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:59.655465   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:59.655486   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.655494   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.655501   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.658387   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:59.659016   30817 pod_ready.go:92] pod "kube-apiserver-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:21:59.659035   30817 pod_ready.go:81] duration metric: took 345.0936ms for pod "kube-apiserver-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
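The "Waited for … due to client-side throttling, not priority and fairness" lines come from the Kubernetes client's own token-bucket limiter, not from the API server. A rough stand-in with golang.org/x/time/rate is sketched below; the 5 QPS / burst 10 figures are an assumption for illustration, not values read from this run.

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket: roughly 5 requests/second with a burst of 10,
	// similar in spirit to a default client-side limiter.
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	ctx := context.Background()

	for i := 0; i < 15; i++ {
		start := time.Now()
		if err := limiter.Wait(ctx); err != nil { // blocks once the burst is spent
			fmt.Println("limiter:", err)
			return
		}
		if d := time.Since(start); d > time.Millisecond {
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, d)
		}
		// ... issue the GET here ...
	}
}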
	I0717 00:21:59.659046   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:59.854757   30817 request.go:629] Waited for 195.632632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881
	I0717 00:21:59.854813   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881
	I0717 00:21:59.854820   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.854831   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.854837   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.857693   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:22:00.054863   30817 request.go:629] Waited for 196.355581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:22:00.054958   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:22:00.054972   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:00.054983   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:00.054994   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:00.057789   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:22:00.058526   30817 pod_ready.go:92] pod "kube-controller-manager-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:22:00.058550   30817 pod_ready.go:81] duration metric: took 399.493448ms for pod "kube-controller-manager-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:00.058564   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:00.254566   30817 request.go:629] Waited for 195.935874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881-m02
	I0717 00:22:00.254623   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881-m02
	I0717 00:22:00.254628   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:00.254635   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:00.254639   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:00.258042   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:00.454980   30817 request.go:629] Waited for 196.362323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:22:00.455027   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:22:00.455032   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:00.455039   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:00.455044   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:00.458367   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:00.459262   30817 pod_ready.go:92] pod "kube-controller-manager-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:22:00.459280   30817 pod_ready.go:81] duration metric: took 400.707959ms for pod "kube-controller-manager-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:00.459292   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2f9rj" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:00.655371   30817 request.go:629] Waited for 196.019686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2f9rj
	I0717 00:22:00.655427   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2f9rj
	I0717 00:22:00.655433   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:00.655440   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:00.655445   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:00.659186   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:00.855369   30817 request.go:629] Waited for 195.349188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:22:00.855451   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:22:00.855460   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:00.855472   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:00.855480   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:00.858360   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:22:00.858883   30817 pod_ready.go:92] pod "kube-proxy-2f9rj" in "kube-system" namespace has status "Ready":"True"
	I0717 00:22:00.858902   30817 pod_ready.go:81] duration metric: took 399.60321ms for pod "kube-proxy-2f9rj" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:00.858913   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7p2jl" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:01.055007   30817 request.go:629] Waited for 196.028908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7p2jl
	I0717 00:22:01.055087   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7p2jl
	I0717 00:22:01.055092   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:01.055101   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:01.055105   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:01.058643   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:01.254694   30817 request.go:629] Waited for 195.281962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:22:01.254744   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:22:01.254749   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:01.254756   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:01.254761   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:01.257827   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:01.258446   30817 pod_ready.go:92] pod "kube-proxy-7p2jl" in "kube-system" namespace has status "Ready":"True"
	I0717 00:22:01.258463   30817 pod_ready.go:81] duration metric: took 399.542723ms for pod "kube-proxy-7p2jl" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:01.258472   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:01.454567   30817 request.go:629] Waited for 196.033234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881
	I0717 00:22:01.454628   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881
	I0717 00:22:01.454633   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:01.454642   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:01.454648   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:01.458294   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:01.655412   30817 request.go:629] Waited for 196.392771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:22:01.655470   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:22:01.655487   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:01.655499   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:01.655507   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:01.659408   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:01.660202   30817 pod_ready.go:92] pod "kube-scheduler-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:22:01.660221   30817 pod_ready.go:81] duration metric: took 401.743987ms for pod "kube-scheduler-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:01.660231   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:01.855275   30817 request.go:629] Waited for 194.980313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881-m02
	I0717 00:22:01.855333   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881-m02
	I0717 00:22:01.855337   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:01.855344   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:01.855352   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:01.857953   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:22:02.055034   30817 request.go:629] Waited for 196.390531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:22:02.055096   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:22:02.055101   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:02.055109   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:02.055113   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:02.058656   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:02.059145   30817 pod_ready.go:92] pod "kube-scheduler-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:22:02.059161   30817 pod_ready.go:81] duration metric: took 398.92395ms for pod "kube-scheduler-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:02.059172   30817 pod_ready.go:38] duration metric: took 4.799554499s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:22:02.059188   30817 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:22:02.059233   30817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:22:02.075634   30817 api_server.go:72] duration metric: took 18.222056013s to wait for apiserver process to appear ...
	I0717 00:22:02.075657   30817 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:22:02.075672   30817 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0717 00:22:02.079824   30817 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0717 00:22:02.079877   30817 round_trippers.go:463] GET https://192.168.39.238:8443/version
	I0717 00:22:02.079884   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:02.079893   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:02.079899   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:02.080776   30817 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 00:22:02.081011   30817 api_server.go:141] control plane version: v1.30.2
	I0717 00:22:02.081029   30817 api_server.go:131] duration metric: took 5.366415ms to wait for apiserver health ...
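The healthz gate is a plain HTTPS GET that must return 200 with a body of "ok". A hedged sketch of that check follows; TLS verification is skipped only to keep the example self-contained, whereas the real check authenticates with the cluster's certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// checkHealthz returns nil when GET <apiServer>/healthz answers 200 "ok",
// which is what the api_server.go lines above are waiting for.
func checkHealthz(client *http.Client, apiServer string) error {
	resp, err := client.Get(apiServer + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	fmt.Println(checkHealthz(client, "https://192.168.39.238:8443"))
}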
	I0717 00:22:02.081038   30817 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:22:02.255405   30817 request.go:629] Waited for 174.301249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:22:02.255476   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:22:02.255484   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:02.255496   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:02.255505   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:02.261058   30817 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:22:02.265747   30817 system_pods.go:59] 17 kube-system pods found
	I0717 00:22:02.265771   30817 system_pods.go:61] "coredns-7db6d8ff4d-7wsqq" [4a433e03-decb-405d-82f1-b14a72412c8a] Running
	I0717 00:22:02.265776   30817 system_pods.go:61] "coredns-7db6d8ff4d-xftzx" [01fe6b06-0568-4da7-bd0c-1883bc99995c] Running
	I0717 00:22:02.265779   30817 system_pods.go:61] "etcd-ha-565881" [4971f520-5352-442e-b9a2-0944b0755b7f] Running
	I0717 00:22:02.265782   30817 system_pods.go:61] "etcd-ha-565881-m02" [4566d137-b6d8-4af0-8c19-db42aad855cc] Running
	I0717 00:22:02.265785   30817 system_pods.go:61] "kindnet-5lrdt" [bd3c879a-726b-40ed-ba4f-897bf43cda26] Running
	I0717 00:22:02.265788   30817 system_pods.go:61] "kindnet-k882n" [a1f0c383-2430-4479-90ad-d944476aee6f] Running
	I0717 00:22:02.265791   30817 system_pods.go:61] "kube-apiserver-ha-565881" [ef350ec6-b254-4b11-8130-fb059c05bc73] Running
	I0717 00:22:02.265794   30817 system_pods.go:61] "kube-apiserver-ha-565881-m02" [58bb06fd-18e6-4457-8bd9-82438e5d6e87] Running
	I0717 00:22:02.265798   30817 system_pods.go:61] "kube-controller-manager-ha-565881" [30ebcd5f-fb7b-4877-bc4b-e04de10a184e] Running
	I0717 00:22:02.265802   30817 system_pods.go:61] "kube-controller-manager-ha-565881-m02" [dfc4ee73-fe0f-4ec4-bdb9-3827093d3ea0] Running
	I0717 00:22:02.265804   30817 system_pods.go:61] "kube-proxy-2f9rj" [d5e16caa-15e9-4295-8a9a-0e66912f9f1b] Running
	I0717 00:22:02.265807   30817 system_pods.go:61] "kube-proxy-7p2jl" [74f5aff6-5e99-4cfe-af04-94198e8d9616] Running
	I0717 00:22:02.265810   30817 system_pods.go:61] "kube-scheduler-ha-565881" [876bc7f0-71d6-45b1-a313-d94df8f89f18] Running
	I0717 00:22:02.265813   30817 system_pods.go:61] "kube-scheduler-ha-565881-m02" [9734780b-67c9-4727-badb-f6ba028ba095] Running
	I0717 00:22:02.265816   30817 system_pods.go:61] "kube-vip-ha-565881" [7d058028-c841-4807-936f-3f81c1718a93] Running
	I0717 00:22:02.265819   30817 system_pods.go:61] "kube-vip-ha-565881-m02" [06e40aae-1d32-4577-92f5-32a6ce3e1813] Running
	I0717 00:22:02.265822   30817 system_pods.go:61] "storage-provisioner" [0aa1050a-43e1-4f7a-a2df-80cafb48e673] Running
	I0717 00:22:02.265827   30817 system_pods.go:74] duration metric: took 184.784618ms to wait for pod list to return data ...
	I0717 00:22:02.265836   30817 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:22:02.454630   30817 request.go:629] Waited for 188.73003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:22:02.454708   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:22:02.454714   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:02.454724   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:02.454732   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:02.459193   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:22:02.459520   30817 default_sa.go:45] found service account: "default"
	I0717 00:22:02.459540   30817 default_sa.go:55] duration metric: took 193.698798ms for default service account to be created ...
	I0717 00:22:02.459548   30817 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:22:02.655031   30817 request.go:629] Waited for 195.408916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:22:02.655134   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:22:02.655148   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:02.655159   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:02.655170   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:02.660880   30817 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:22:02.664828   30817 system_pods.go:86] 17 kube-system pods found
	I0717 00:22:02.664850   30817 system_pods.go:89] "coredns-7db6d8ff4d-7wsqq" [4a433e03-decb-405d-82f1-b14a72412c8a] Running
	I0717 00:22:02.664856   30817 system_pods.go:89] "coredns-7db6d8ff4d-xftzx" [01fe6b06-0568-4da7-bd0c-1883bc99995c] Running
	I0717 00:22:02.664869   30817 system_pods.go:89] "etcd-ha-565881" [4971f520-5352-442e-b9a2-0944b0755b7f] Running
	I0717 00:22:02.664873   30817 system_pods.go:89] "etcd-ha-565881-m02" [4566d137-b6d8-4af0-8c19-db42aad855cc] Running
	I0717 00:22:02.664877   30817 system_pods.go:89] "kindnet-5lrdt" [bd3c879a-726b-40ed-ba4f-897bf43cda26] Running
	I0717 00:22:02.664880   30817 system_pods.go:89] "kindnet-k882n" [a1f0c383-2430-4479-90ad-d944476aee6f] Running
	I0717 00:22:02.664884   30817 system_pods.go:89] "kube-apiserver-ha-565881" [ef350ec6-b254-4b11-8130-fb059c05bc73] Running
	I0717 00:22:02.664889   30817 system_pods.go:89] "kube-apiserver-ha-565881-m02" [58bb06fd-18e6-4457-8bd9-82438e5d6e87] Running
	I0717 00:22:02.664893   30817 system_pods.go:89] "kube-controller-manager-ha-565881" [30ebcd5f-fb7b-4877-bc4b-e04de10a184e] Running
	I0717 00:22:02.664897   30817 system_pods.go:89] "kube-controller-manager-ha-565881-m02" [dfc4ee73-fe0f-4ec4-bdb9-3827093d3ea0] Running
	I0717 00:22:02.664900   30817 system_pods.go:89] "kube-proxy-2f9rj" [d5e16caa-15e9-4295-8a9a-0e66912f9f1b] Running
	I0717 00:22:02.664904   30817 system_pods.go:89] "kube-proxy-7p2jl" [74f5aff6-5e99-4cfe-af04-94198e8d9616] Running
	I0717 00:22:02.664908   30817 system_pods.go:89] "kube-scheduler-ha-565881" [876bc7f0-71d6-45b1-a313-d94df8f89f18] Running
	I0717 00:22:02.664911   30817 system_pods.go:89] "kube-scheduler-ha-565881-m02" [9734780b-67c9-4727-badb-f6ba028ba095] Running
	I0717 00:22:02.664915   30817 system_pods.go:89] "kube-vip-ha-565881" [7d058028-c841-4807-936f-3f81c1718a93] Running
	I0717 00:22:02.664918   30817 system_pods.go:89] "kube-vip-ha-565881-m02" [06e40aae-1d32-4577-92f5-32a6ce3e1813] Running
	I0717 00:22:02.664922   30817 system_pods.go:89] "storage-provisioner" [0aa1050a-43e1-4f7a-a2df-80cafb48e673] Running
	I0717 00:22:02.664928   30817 system_pods.go:126] duration metric: took 205.375ms to wait for k8s-apps to be running ...
	I0717 00:22:02.664937   30817 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:22:02.664977   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:22:02.682188   30817 system_svc.go:56] duration metric: took 17.242023ms WaitForService to wait for kubelet
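The kubelet check is a single systemctl invocation whose exit code decides the result. Below is a local sketch with os/exec using the exact command string from the log; minikube runs it on the guest through its SSH runner, which is elided here.

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive mirrors the system_svc.go check above: systemctl exits 0
// only when the unit is active.
func kubeletActive() bool {
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	return cmd.Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}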
	I0717 00:22:02.682214   30817 kubeadm.go:582] duration metric: took 18.828638273s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:22:02.682234   30817 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:22:02.854614   30817 request.go:629] Waited for 172.294632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes
	I0717 00:22:02.854676   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes
	I0717 00:22:02.854683   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:02.854694   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:02.854707   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:02.857799   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:02.858706   30817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:22:02.858733   30817 node_conditions.go:123] node cpu capacity is 2
	I0717 00:22:02.858745   30817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:22:02.858752   30817 node_conditions.go:123] node cpu capacity is 2
	I0717 00:22:02.858761   30817 node_conditions.go:105] duration metric: took 176.521225ms to run NodePressure ...
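The NodePressure verification reduces to listing /api/v1/nodes and reading status.capacity for each item. A small decoding sketch with a trimmed example payload is shown below; the capacity values are copied from the log above, and a real run would decode the HTTPS response body instead.

package main

import (
	"encoding/json"
	"fmt"
)

// nodeList captures only the fields reported by the node_conditions.go
// lines above: node name plus status.capacity.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	raw := []byte(`{"items":[{"metadata":{"name":"ha-565881"},
	  "status":{"capacity":{"cpu":"2","ephemeral-storage":"17734596Ki"}}}]}`)

	var nl nodeList
	if err := json.Unmarshal(raw, &nl); err != nil {
		panic(err)
	}
	for _, n := range nl.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
	}
}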
	I0717 00:22:02.858777   30817 start.go:241] waiting for startup goroutines ...
	I0717 00:22:02.858810   30817 start.go:255] writing updated cluster config ...
	I0717 00:22:02.861163   30817 out.go:177] 
	I0717 00:22:02.862755   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:22:02.862879   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:22:02.864551   30817 out.go:177] * Starting "ha-565881-m03" control-plane node in "ha-565881" cluster
	I0717 00:22:02.865908   30817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:22:02.865931   30817 cache.go:56] Caching tarball of preloaded images
	I0717 00:22:02.866022   30817 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:22:02.866032   30817 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:22:02.866110   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:22:02.866310   30817 start.go:360] acquireMachinesLock for ha-565881-m03: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:22:02.866349   30817 start.go:364] duration metric: took 20.47µs to acquireMachinesLock for "ha-565881-m03"
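acquireMachinesLock serialises machine creation per profile; the lock spec above shows the 500ms retry delay and 13m timeout it was created with (minikube uses a named mutex, not the file-based scheme below). A simplified stand-in, purely for illustration:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock is a simplistic stand-in for the machines lock: create a
// lock file exclusively, retrying every 500ms until the timeout.
func acquireLock(path string, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	release, err := acquireLock("/tmp/ha-565881-machines.lock", 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to provision ha-565881-m03")
}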
	I0717 00:22:02.866362   30817 start.go:93] Provisioning new machine with config: &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:22:02.866447   30817 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0717 00:22:02.867988   30817 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 00:22:02.868058   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:22:02.868087   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:22:02.882826   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0717 00:22:02.883258   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:22:02.883692   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:22:02.883710   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:22:02.884029   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:22:02.884205   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetMachineName
	I0717 00:22:02.884369   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:02.884545   30817 start.go:159] libmachine.API.Create for "ha-565881" (driver="kvm2")
	I0717 00:22:02.884592   30817 client.go:168] LocalClient.Create starting
	I0717 00:22:02.884625   30817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 00:22:02.884659   30817 main.go:141] libmachine: Decoding PEM data...
	I0717 00:22:02.884674   30817 main.go:141] libmachine: Parsing certificate...
	I0717 00:22:02.884720   30817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 00:22:02.884737   30817 main.go:141] libmachine: Decoding PEM data...
	I0717 00:22:02.884746   30817 main.go:141] libmachine: Parsing certificate...
	I0717 00:22:02.884761   30817 main.go:141] libmachine: Running pre-create checks...
	I0717 00:22:02.884769   30817 main.go:141] libmachine: (ha-565881-m03) Calling .PreCreateCheck
	I0717 00:22:02.884917   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetConfigRaw
	I0717 00:22:02.885337   30817 main.go:141] libmachine: Creating machine...
	I0717 00:22:02.885351   30817 main.go:141] libmachine: (ha-565881-m03) Calling .Create
	I0717 00:22:02.885464   30817 main.go:141] libmachine: (ha-565881-m03) Creating KVM machine...
	I0717 00:22:02.886765   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found existing default KVM network
	I0717 00:22:02.886857   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found existing private KVM network mk-ha-565881
	I0717 00:22:02.887001   30817 main.go:141] libmachine: (ha-565881-m03) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03 ...
	I0717 00:22:02.887025   30817 main.go:141] libmachine: (ha-565881-m03) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 00:22:02.887055   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:02.886979   31596 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:22:02.887178   30817 main.go:141] libmachine: (ha-565881-m03) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 00:22:03.100976   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:03.100850   31596 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa...
	I0717 00:22:03.546788   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:03.546650   31596 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/ha-565881-m03.rawdisk...
	I0717 00:22:03.546816   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Writing magic tar header
	I0717 00:22:03.546831   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Writing SSH key tar header
	I0717 00:22:03.546841   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:03.546762   31596 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03 ...
	I0717 00:22:03.546874   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03
	I0717 00:22:03.546954   30817 main.go:141] libmachine: (ha-565881-m03) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03 (perms=drwx------)
	I0717 00:22:03.546972   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 00:22:03.546981   30817 main.go:141] libmachine: (ha-565881-m03) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:22:03.547004   30817 main.go:141] libmachine: (ha-565881-m03) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 00:22:03.547016   30817 main.go:141] libmachine: (ha-565881-m03) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 00:22:03.547025   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:22:03.547036   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 00:22:03.547044   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:22:03.547058   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:22:03.547065   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home
	I0717 00:22:03.547073   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Skipping /home - not owner
	I0717 00:22:03.547084   30817 main.go:141] libmachine: (ha-565881-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:22:03.547094   30817 main.go:141] libmachine: (ha-565881-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:22:03.547131   30817 main.go:141] libmachine: (ha-565881-m03) Creating domain...
	I0717 00:22:03.547999   30817 main.go:141] libmachine: (ha-565881-m03) define libvirt domain using xml: 
	I0717 00:22:03.548016   30817 main.go:141] libmachine: (ha-565881-m03) <domain type='kvm'>
	I0717 00:22:03.548026   30817 main.go:141] libmachine: (ha-565881-m03)   <name>ha-565881-m03</name>
	I0717 00:22:03.548038   30817 main.go:141] libmachine: (ha-565881-m03)   <memory unit='MiB'>2200</memory>
	I0717 00:22:03.548045   30817 main.go:141] libmachine: (ha-565881-m03)   <vcpu>2</vcpu>
	I0717 00:22:03.548051   30817 main.go:141] libmachine: (ha-565881-m03)   <features>
	I0717 00:22:03.548060   30817 main.go:141] libmachine: (ha-565881-m03)     <acpi/>
	I0717 00:22:03.548068   30817 main.go:141] libmachine: (ha-565881-m03)     <apic/>
	I0717 00:22:03.548080   30817 main.go:141] libmachine: (ha-565881-m03)     <pae/>
	I0717 00:22:03.548093   30817 main.go:141] libmachine: (ha-565881-m03)     
	I0717 00:22:03.548109   30817 main.go:141] libmachine: (ha-565881-m03)   </features>
	I0717 00:22:03.548125   30817 main.go:141] libmachine: (ha-565881-m03)   <cpu mode='host-passthrough'>
	I0717 00:22:03.548145   30817 main.go:141] libmachine: (ha-565881-m03)   
	I0717 00:22:03.548156   30817 main.go:141] libmachine: (ha-565881-m03)   </cpu>
	I0717 00:22:03.548162   30817 main.go:141] libmachine: (ha-565881-m03)   <os>
	I0717 00:22:03.548167   30817 main.go:141] libmachine: (ha-565881-m03)     <type>hvm</type>
	I0717 00:22:03.548174   30817 main.go:141] libmachine: (ha-565881-m03)     <boot dev='cdrom'/>
	I0717 00:22:03.548181   30817 main.go:141] libmachine: (ha-565881-m03)     <boot dev='hd'/>
	I0717 00:22:03.548187   30817 main.go:141] libmachine: (ha-565881-m03)     <bootmenu enable='no'/>
	I0717 00:22:03.548192   30817 main.go:141] libmachine: (ha-565881-m03)   </os>
	I0717 00:22:03.548197   30817 main.go:141] libmachine: (ha-565881-m03)   <devices>
	I0717 00:22:03.548214   30817 main.go:141] libmachine: (ha-565881-m03)     <disk type='file' device='cdrom'>
	I0717 00:22:03.548224   30817 main.go:141] libmachine: (ha-565881-m03)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/boot2docker.iso'/>
	I0717 00:22:03.548231   30817 main.go:141] libmachine: (ha-565881-m03)       <target dev='hdc' bus='scsi'/>
	I0717 00:22:03.548236   30817 main.go:141] libmachine: (ha-565881-m03)       <readonly/>
	I0717 00:22:03.548243   30817 main.go:141] libmachine: (ha-565881-m03)     </disk>
	I0717 00:22:03.548250   30817 main.go:141] libmachine: (ha-565881-m03)     <disk type='file' device='disk'>
	I0717 00:22:03.548262   30817 main.go:141] libmachine: (ha-565881-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:22:03.548285   30817 main.go:141] libmachine: (ha-565881-m03)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/ha-565881-m03.rawdisk'/>
	I0717 00:22:03.548305   30817 main.go:141] libmachine: (ha-565881-m03)       <target dev='hda' bus='virtio'/>
	I0717 00:22:03.548336   30817 main.go:141] libmachine: (ha-565881-m03)     </disk>
	I0717 00:22:03.548359   30817 main.go:141] libmachine: (ha-565881-m03)     <interface type='network'>
	I0717 00:22:03.548371   30817 main.go:141] libmachine: (ha-565881-m03)       <source network='mk-ha-565881'/>
	I0717 00:22:03.548383   30817 main.go:141] libmachine: (ha-565881-m03)       <model type='virtio'/>
	I0717 00:22:03.548392   30817 main.go:141] libmachine: (ha-565881-m03)     </interface>
	I0717 00:22:03.548402   30817 main.go:141] libmachine: (ha-565881-m03)     <interface type='network'>
	I0717 00:22:03.548413   30817 main.go:141] libmachine: (ha-565881-m03)       <source network='default'/>
	I0717 00:22:03.548423   30817 main.go:141] libmachine: (ha-565881-m03)       <model type='virtio'/>
	I0717 00:22:03.548435   30817 main.go:141] libmachine: (ha-565881-m03)     </interface>
	I0717 00:22:03.548443   30817 main.go:141] libmachine: (ha-565881-m03)     <serial type='pty'>
	I0717 00:22:03.548453   30817 main.go:141] libmachine: (ha-565881-m03)       <target port='0'/>
	I0717 00:22:03.548463   30817 main.go:141] libmachine: (ha-565881-m03)     </serial>
	I0717 00:22:03.548472   30817 main.go:141] libmachine: (ha-565881-m03)     <console type='pty'>
	I0717 00:22:03.548482   30817 main.go:141] libmachine: (ha-565881-m03)       <target type='serial' port='0'/>
	I0717 00:22:03.548491   30817 main.go:141] libmachine: (ha-565881-m03)     </console>
	I0717 00:22:03.548501   30817 main.go:141] libmachine: (ha-565881-m03)     <rng model='virtio'>
	I0717 00:22:03.548512   30817 main.go:141] libmachine: (ha-565881-m03)       <backend model='random'>/dev/random</backend>
	I0717 00:22:03.548522   30817 main.go:141] libmachine: (ha-565881-m03)     </rng>
	I0717 00:22:03.548530   30817 main.go:141] libmachine: (ha-565881-m03)     
	I0717 00:22:03.548542   30817 main.go:141] libmachine: (ha-565881-m03)     
	I0717 00:22:03.548554   30817 main.go:141] libmachine: (ha-565881-m03)   </devices>
	I0717 00:22:03.548587   30817 main.go:141] libmachine: (ha-565881-m03) </domain>
	I0717 00:22:03.548596   30817 main.go:141] libmachine: (ha-565881-m03) 
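The XML just logged is handed to libvirt to define and boot the ha-565881-m03 guest; minikube drives this through the libvirt Go bindings. An equivalent, hedged sketch below shells out to virsh instead. The embedded XML is truncated for brevity, so a real run would need the full document from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineAndStart writes the domain XML to a temp file, defines it against
// the system libvirt daemon and boots it -- the virsh equivalent of the
// "define libvirt domain using xml" / "Creating domain..." steps above.
func defineAndStart(name, domainXML string) error {
	tmp, err := os.CreateTemp("", name+"-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.WriteString(domainXML); err != nil {
		return err
	}
	tmp.Close()

	for _, args := range [][]string{
		{"-c", "qemu:///system", "define", tmp.Name()},
		{"-c", "qemu:///system", "start", name},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Truncated placeholder; substitute the complete <domain> XML above.
	const xml = `<domain type='kvm'><name>ha-565881-m03</name>...</domain>`
	fmt.Println(defineAndStart("ha-565881-m03", xml))
}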
	I0717 00:22:03.554999   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:fb:f0:3d in network default
	I0717 00:22:03.555533   30817 main.go:141] libmachine: (ha-565881-m03) Ensuring networks are active...
	I0717 00:22:03.555553   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:03.556171   30817 main.go:141] libmachine: (ha-565881-m03) Ensuring network default is active
	I0717 00:22:03.556542   30817 main.go:141] libmachine: (ha-565881-m03) Ensuring network mk-ha-565881 is active
	I0717 00:22:03.556987   30817 main.go:141] libmachine: (ha-565881-m03) Getting domain xml...
	I0717 00:22:03.557752   30817 main.go:141] libmachine: (ha-565881-m03) Creating domain...
	I0717 00:22:04.806677   30817 main.go:141] libmachine: (ha-565881-m03) Waiting to get IP...
	I0717 00:22:04.807572   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:04.808016   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:04.808046   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:04.807995   31596 retry.go:31] will retry after 211.718343ms: waiting for machine to come up
	I0717 00:22:05.021438   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:05.022057   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:05.022086   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:05.022008   31596 retry.go:31] will retry after 265.863837ms: waiting for machine to come up
	I0717 00:22:05.289551   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:05.289951   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:05.289981   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:05.289890   31596 retry.go:31] will retry after 349.875152ms: waiting for machine to come up
	I0717 00:22:05.641527   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:05.642003   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:05.642032   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:05.641961   31596 retry.go:31] will retry after 607.972538ms: waiting for machine to come up
	I0717 00:22:06.251736   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:06.252197   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:06.252232   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:06.252149   31596 retry.go:31] will retry after 697.741072ms: waiting for machine to come up
	I0717 00:22:06.951013   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:06.951421   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:06.951451   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:06.951372   31596 retry.go:31] will retry after 904.364294ms: waiting for machine to come up
	I0717 00:22:07.857282   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:07.857694   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:07.857724   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:07.857653   31596 retry.go:31] will retry after 924.755324ms: waiting for machine to come up
	I0717 00:22:08.783393   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:08.783771   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:08.783792   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:08.783740   31596 retry.go:31] will retry after 1.197183629s: waiting for machine to come up
	I0717 00:22:09.983164   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:09.983593   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:09.983621   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:09.983543   31596 retry.go:31] will retry after 1.710729828s: waiting for machine to come up
	I0717 00:22:11.696577   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:11.696989   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:11.697011   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:11.696955   31596 retry.go:31] will retry after 1.417585787s: waiting for machine to come up
	I0717 00:22:13.115659   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:13.116095   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:13.116125   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:13.116045   31596 retry.go:31] will retry after 2.443611308s: waiting for machine to come up
	I0717 00:22:15.562557   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:15.562962   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:15.562989   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:15.562916   31596 retry.go:31] will retry after 2.303917621s: waiting for machine to come up
	I0717 00:22:17.868306   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:17.868726   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:17.868752   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:17.868683   31596 retry.go:31] will retry after 2.93737042s: waiting for machine to come up
	I0717 00:22:20.809508   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:20.809833   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:20.809861   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:20.809788   31596 retry.go:31] will retry after 5.18911505s: waiting for machine to come up
	I0717 00:22:26.001820   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:26.002387   30817 main.go:141] libmachine: (ha-565881-m03) Found IP for machine: 192.168.39.97
	I0717 00:22:26.002412   30817 main.go:141] libmachine: (ha-565881-m03) Reserving static IP address...
	I0717 00:22:26.002425   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has current primary IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:26.002888   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find host DHCP lease matching {name: "ha-565881-m03", mac: "52:54:00:43:60:7e", ip: "192.168.39.97"} in network mk-ha-565881
	I0717 00:22:26.074647   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Getting to WaitForSSH function...
	I0717 00:22:26.074675   30817 main.go:141] libmachine: (ha-565881-m03) Reserved static IP address: 192.168.39.97
	I0717 00:22:26.074686   30817 main.go:141] libmachine: (ha-565881-m03) Waiting for SSH to be available...
	I0717 00:22:26.077499   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:26.077813   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881
	I0717 00:22:26.077840   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find defined IP address of network mk-ha-565881 interface with MAC address 52:54:00:43:60:7e
	I0717 00:22:26.078046   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Using SSH client type: external
	I0717 00:22:26.078075   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa (-rw-------)
	I0717 00:22:26.078122   30817 main.go:141] libmachine: (ha-565881-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:22:26.078152   30817 main.go:141] libmachine: (ha-565881-m03) DBG | About to run SSH command:
	I0717 00:22:26.078169   30817 main.go:141] libmachine: (ha-565881-m03) DBG | exit 0
	I0717 00:22:26.081736   30817 main.go:141] libmachine: (ha-565881-m03) DBG | SSH cmd err, output: exit status 255: 
	I0717 00:22:26.081754   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 00:22:26.081785   30817 main.go:141] libmachine: (ha-565881-m03) DBG | command : exit 0
	I0717 00:22:26.081810   30817 main.go:141] libmachine: (ha-565881-m03) DBG | err     : exit status 255
	I0717 00:22:26.081835   30817 main.go:141] libmachine: (ha-565881-m03) DBG | output  : 
	I0717 00:22:29.083044   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Getting to WaitForSSH function...
	I0717 00:22:29.085550   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.085950   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.085977   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.086093   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Using SSH client type: external
	I0717 00:22:29.086117   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa (-rw-------)
	I0717 00:22:29.086146   30817 main.go:141] libmachine: (ha-565881-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:22:29.086171   30817 main.go:141] libmachine: (ha-565881-m03) DBG | About to run SSH command:
	I0717 00:22:29.086185   30817 main.go:141] libmachine: (ha-565881-m03) DBG | exit 0
	I0717 00:22:29.216890   30817 main.go:141] libmachine: (ha-565881-m03) DBG | SSH cmd err, output: <nil>: 
	I0717 00:22:29.217130   30817 main.go:141] libmachine: (ha-565881-m03) KVM machine creation complete!
	I0717 00:22:29.217425   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetConfigRaw
	I0717 00:22:29.217916   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:29.218084   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:29.218244   30817 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:22:29.218261   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetState
	I0717 00:22:29.219265   30817 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:22:29.219281   30817 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:22:29.219286   30817 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:22:29.219292   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:29.221770   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.222160   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.222188   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.222336   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:29.222491   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.222633   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.222801   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:29.222961   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:22:29.223225   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 00:22:29.223244   30817 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:22:29.339981   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:22:29.340035   30817 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:22:29.340049   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:29.342737   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.343077   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.343101   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.343281   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:29.343467   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.343643   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.343743   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:29.343882   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:22:29.344075   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 00:22:29.344088   30817 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:22:29.457763   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:22:29.457867   30817 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:22:29.457886   30817 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:22:29.457904   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetMachineName
	I0717 00:22:29.458162   30817 buildroot.go:166] provisioning hostname "ha-565881-m03"
	I0717 00:22:29.458186   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetMachineName
	I0717 00:22:29.458373   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:29.461035   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.461444   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.461474   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.461629   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:29.461805   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.461932   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.462072   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:29.462234   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:22:29.462405   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 00:22:29.462418   30817 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565881-m03 && echo "ha-565881-m03" | sudo tee /etc/hostname
	I0717 00:22:29.591957   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881-m03
	
	I0717 00:22:29.591990   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:29.594904   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.595285   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.595313   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.595472   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:29.595651   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.595825   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.595958   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:29.596162   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:22:29.596333   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 00:22:29.596351   30817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565881-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565881-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565881-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:22:29.722001   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:22:29.722027   30817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:22:29.722046   30817 buildroot.go:174] setting up certificates
	I0717 00:22:29.722055   30817 provision.go:84] configureAuth start
	I0717 00:22:29.722062   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetMachineName
	I0717 00:22:29.722320   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:22:29.724993   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.725341   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.725369   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.725486   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:29.727638   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.727941   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.727963   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.728092   30817 provision.go:143] copyHostCerts
	I0717 00:22:29.728133   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:22:29.728161   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 00:22:29.728170   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:22:29.728231   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:22:29.728311   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:22:29.728329   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 00:22:29.728335   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:22:29.728359   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:22:29.728423   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:22:29.728438   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 00:22:29.728444   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:22:29.728464   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:22:29.728533   30817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.ha-565881-m03 san=[127.0.0.1 192.168.39.97 ha-565881-m03 localhost minikube]
	I0717 00:22:30.102761   30817 provision.go:177] copyRemoteCerts
	I0717 00:22:30.102834   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:22:30.102888   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:30.105368   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.105688   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.105712   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.105899   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:30.106098   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.106261   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:30.106394   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:22:30.190756   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:22:30.190838   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:22:30.218145   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:22:30.218218   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:22:30.245610   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:22:30.245686   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 00:22:30.272316   30817 provision.go:87] duration metric: took 550.249946ms to configureAuth
	I0717 00:22:30.272341   30817 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:22:30.272532   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:22:30.272633   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:30.276262   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.276690   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.276715   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.276901   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:30.277104   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.277260   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.277375   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:30.277517   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:22:30.277667   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 00:22:30.277683   30817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:22:30.557275   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:22:30.557300   30817 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:22:30.557311   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetURL
	I0717 00:22:30.558689   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Using libvirt version 6000000
	I0717 00:22:30.560704   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.561108   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.561136   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.561265   30817 main.go:141] libmachine: Docker is up and running!
	I0717 00:22:30.561279   30817 main.go:141] libmachine: Reticulating splines...
	I0717 00:22:30.561285   30817 client.go:171] duration metric: took 27.676684071s to LocalClient.Create
	I0717 00:22:30.561307   30817 start.go:167] duration metric: took 27.676764164s to libmachine.API.Create "ha-565881"
	I0717 00:22:30.561316   30817 start.go:293] postStartSetup for "ha-565881-m03" (driver="kvm2")
	I0717 00:22:30.561324   30817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:22:30.561341   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:30.561582   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:22:30.561610   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:30.563489   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.563836   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.563863   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.563967   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:30.564128   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.564289   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:30.564396   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:22:30.656469   30817 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:22:30.660891   30817 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:22:30.660912   30817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 00:22:30.660982   30817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 00:22:30.661071   30817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 00:22:30.661082   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /etc/ssl/certs/200682.pem
	I0717 00:22:30.661189   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:22:30.671200   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:22:30.695582   30817 start.go:296] duration metric: took 134.255665ms for postStartSetup
	I0717 00:22:30.695629   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetConfigRaw
	I0717 00:22:30.696197   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:22:30.698630   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.698951   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.698983   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.699238   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:22:30.699526   30817 start.go:128] duration metric: took 27.833068299s to createHost
	I0717 00:22:30.699550   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:30.701769   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.702109   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.702135   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.702261   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:30.702431   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.702598   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.702713   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:30.702875   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:22:30.703038   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 00:22:30.703052   30817 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:22:30.821178   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721175750.800249413
	
	I0717 00:22:30.821207   30817 fix.go:216] guest clock: 1721175750.800249413
	I0717 00:22:30.821214   30817 fix.go:229] Guest: 2024-07-17 00:22:30.800249413 +0000 UTC Remote: 2024-07-17 00:22:30.699539055 +0000 UTC m=+151.992231366 (delta=100.710358ms)
	I0717 00:22:30.821235   30817 fix.go:200] guest clock delta is within tolerance: 100.710358ms
	I0717 00:22:30.821242   30817 start.go:83] releasing machines lock for "ha-565881-m03", held for 27.95488658s
	I0717 00:22:30.821268   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:30.821510   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:22:30.824447   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.824878   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.824919   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.826850   30817 out.go:177] * Found network options:
	I0717 00:22:30.828168   30817 out.go:177]   - NO_PROXY=192.168.39.238,192.168.39.14
	W0717 00:22:30.829541   30817 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 00:22:30.829573   30817 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:22:30.829591   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:30.830154   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:30.830371   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:30.830475   30817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:22:30.830511   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	W0717 00:22:30.830589   30817 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 00:22:30.830622   30817 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:22:30.830689   30817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:22:30.830713   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:30.833259   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.833280   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.833624   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.833671   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.833716   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.833740   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.833881   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:30.834002   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:30.834085   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.834148   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.834223   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:30.834286   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:30.834356   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:22:30.834405   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:22:31.070544   30817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:22:31.077562   30817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:22:31.077642   30817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:22:31.096361   30817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:22:31.096385   30817 start.go:495] detecting cgroup driver to use...
	I0717 00:22:31.096449   30817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:22:31.113441   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:22:31.128116   30817 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:22:31.128168   30817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:22:31.142089   30817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:22:31.157273   30817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:22:31.274897   30817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:22:31.417373   30817 docker.go:233] disabling docker service ...
	I0717 00:22:31.417435   30817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:22:31.432043   30817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:22:31.444871   30817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:22:31.586219   30817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:22:31.711201   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:22:31.725226   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:22:31.744010   30817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:22:31.744064   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:22:31.754493   30817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:22:31.754549   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:22:31.764857   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:22:31.774815   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:22:31.786360   30817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:22:31.797592   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:22:31.809735   30817 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:22:31.827409   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:22:31.838541   30817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:22:31.848933   30817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:22:31.848988   30817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:22:31.863023   30817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:22:31.873177   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:22:31.996760   30817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:22:32.139217   30817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:22:32.139301   30817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:22:32.144588   30817 start.go:563] Will wait 60s for crictl version
	I0717 00:22:32.144652   30817 ssh_runner.go:195] Run: which crictl
	I0717 00:22:32.148444   30817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:22:32.194079   30817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:22:32.194170   30817 ssh_runner.go:195] Run: crio --version
	I0717 00:22:32.227119   30817 ssh_runner.go:195] Run: crio --version
	I0717 00:22:32.257889   30817 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:22:32.259114   30817 out.go:177]   - env NO_PROXY=192.168.39.238
	I0717 00:22:32.260362   30817 out.go:177]   - env NO_PROXY=192.168.39.238,192.168.39.14
	I0717 00:22:32.261676   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:22:32.263900   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:32.264277   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:32.264300   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:32.264522   30817 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:22:32.268958   30817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:22:32.282000   30817 mustload.go:65] Loading cluster: ha-565881
	I0717 00:22:32.282214   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:22:32.282490   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:22:32.282531   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:22:32.296902   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0717 00:22:32.297298   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:22:32.297737   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:22:32.297763   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:22:32.298097   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:22:32.298290   30817 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:22:32.300113   30817 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:22:32.300385   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:22:32.300421   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:22:32.314892   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37993
	I0717 00:22:32.315291   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:22:32.315713   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:22:32.315733   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:22:32.315985   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:22:32.316185   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:22:32.316331   30817 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881 for IP: 192.168.39.97
	I0717 00:22:32.316344   30817 certs.go:194] generating shared ca certs ...
	I0717 00:22:32.316360   30817 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:22:32.316500   30817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 00:22:32.316551   30817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 00:22:32.316572   30817 certs.go:256] generating profile certs ...
	I0717 00:22:32.316659   30817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key
	I0717 00:22:32.316692   30817 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.65a8b113
	I0717 00:22:32.316711   30817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.65a8b113 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.14 192.168.39.97 192.168.39.254]
	I0717 00:22:32.429859   30817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.65a8b113 ...
	I0717 00:22:32.429892   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.65a8b113: {Name:mkb173c5cf13ec370191e3cf7b873ed5811cd7be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:22:32.430072   30817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.65a8b113 ...
	I0717 00:22:32.430084   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.65a8b113: {Name:mk641c824f290b6f90aafcb698fd5c766c8aba2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:22:32.430165   30817 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.65a8b113 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt
	I0717 00:22:32.430307   30817 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.65a8b113 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key
	I0717 00:22:32.430442   30817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key
	I0717 00:22:32.430460   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:22:32.430474   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:22:32.430489   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:22:32.430502   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:22:32.430513   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:22:32.430530   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:22:32.430544   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:22:32.430555   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:22:32.430604   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 00:22:32.430634   30817 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 00:22:32.430645   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:22:32.430670   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:22:32.430696   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:22:32.430723   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 00:22:32.430765   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:22:32.430794   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /usr/share/ca-certificates/200682.pem
	I0717 00:22:32.430809   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:22:32.430823   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem -> /usr/share/ca-certificates/20068.pem
	I0717 00:22:32.430864   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:22:32.433531   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:22:32.433903   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:22:32.433930   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:22:32.434116   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:22:32.434313   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:22:32.434460   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:22:32.434577   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:22:32.512988   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 00:22:32.518081   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 00:22:32.529751   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 00:22:32.534297   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0717 00:22:32.546070   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 00:22:32.550827   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 00:22:32.561467   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 00:22:32.565939   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0717 00:22:32.576500   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 00:22:32.581011   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 00:22:32.592147   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 00:22:32.596865   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 00:22:32.608689   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:22:32.637050   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:22:32.663322   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:22:32.688733   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:22:32.713967   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0717 00:22:32.740232   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:22:32.765991   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:22:32.789500   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:22:32.813392   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 00:22:32.840594   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:22:32.866280   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 00:22:32.892068   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 00:22:32.909507   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0717 00:22:32.927221   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 00:22:32.945031   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0717 00:22:32.962994   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 00:22:32.979730   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 00:22:32.996113   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
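The scp lines above all follow one pattern: stat the file on the primary control plane, pull it into memory, then write it out on the joining node. A hand-rolled equivalent would look roughly like the sketch below (illustrative only; the host aliases "primary" and "m03" are placeholders, not names from this run):

  ssh primary 'stat -c %s /var/lib/minikube/certs/sa.pub'          # confirm the source file and its size
  scp primary:/var/lib/minikube/certs/sa.pub /tmp/sa.pub           # pull into a local buffer
  scp /tmp/sa.pub m03:/var/lib/minikube/certs/sa.pub               # push to the joining control-plane node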
	I0717 00:22:33.012363   30817 ssh_runner.go:195] Run: openssl version
	I0717 00:22:33.018269   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:22:33.029243   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:22:33.033500   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:22:33.033543   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:22:33.039222   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:22:33.049999   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 00:22:33.060608   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 00:22:33.065264   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 00:22:33.065322   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 00:22:33.071592   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 00:22:33.083902   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 00:22:33.095304   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 00:22:33.099722   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 00:22:33.099766   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 00:22:33.105677   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
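The hash-named links created above (b5213941.0, 51391683.0, 3ec20f2e.0) come straight from openssl's subject-hash output; the same link can be recreated by hand. Sketch only, with the path taken from the log:

  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"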
	I0717 00:22:33.116949   30817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:22:33.120835   30817 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:22:33.120884   30817 kubeadm.go:934] updating node {m03 192.168.39.97 8443 v1.30.2 crio true true} ...
	I0717 00:22:33.120966   30817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565881-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
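Once the node is up, the rendered kubelet flags shown above can be double-checked against the installed unit and its drop-ins. This is a hypothetical verification step, not a command from this run:

  systemctl cat kubelet | grep -e node-ip -e hostname-override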
	I0717 00:22:33.120988   30817 kube-vip.go:115] generating kube-vip config ...
	I0717 00:22:33.121019   30817 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:22:33.138474   30817 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:22:33.138541   30817 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
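The manifest above is written as a static pod (see the kube-vip.yaml scp into /etc/kubernetes/manifests further down), so kubelet on each control-plane node runs it directly. A quick way to confirm the VIP from the address field is answering (sketch; address and port taken from the config above):

  ping -c1 192.168.39.254
  curl -sk https://192.168.39.254:8443/healthz   # /healthz is readable without credentials on a default kubeadm install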
	I0717 00:22:33.138596   30817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:22:33.147765   30817 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 00:22:33.147810   30817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 00:22:33.157388   30817 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0717 00:22:33.157413   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:22:33.157415   30817 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 00:22:33.157429   30817 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0717 00:22:33.157435   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:22:33.157464   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:22:33.157475   30817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:22:33.157500   30817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:22:33.171689   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:22:33.171743   30817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 00:22:33.171772   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 00:22:33.171779   30817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:22:33.171694   30817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 00:22:33.171882   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 00:22:33.189423   30817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 00:22:33.189458   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
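The binaries come from minikube's local cache here, but the checksum URLs in the log describe the usual manual download-and-verify flow. Illustrative, using the kubelet URL as logged above:

  curl -LO https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet
  curl -LO https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check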
	I0717 00:22:34.038361   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 00:22:34.047755   30817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0717 00:22:34.064851   30817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:22:34.083696   30817 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:22:34.101996   30817 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:22:34.106031   30817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:22:34.118342   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:22:34.257388   30817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:22:34.279588   30817 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:22:34.279924   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:22:34.279968   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:22:34.295679   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36457
	I0717 00:22:34.296113   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:22:34.296738   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:22:34.296771   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:22:34.297155   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:22:34.297334   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:22:34.297539   30817 start.go:317] joinCluster: &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:22:34.297694   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 00:22:34.297714   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:22:34.301080   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:22:34.301631   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:22:34.301654   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:22:34.301921   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:22:34.302108   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:22:34.302261   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:22:34.302408   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:22:34.464709   30817 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:22:34.464765   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ulp9g8.7cfxncvt58ljnnv6 --discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565881-m03 --control-plane --apiserver-advertise-address=192.168.39.97 --apiserver-bind-port=8443"
	I0717 00:22:58.410484   30817 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ulp9g8.7cfxncvt58ljnnv6 --discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565881-m03 --control-plane --apiserver-advertise-address=192.168.39.97 --apiserver-bind-port=8443": (23.94569319s)
	I0717 00:22:58.410524   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 00:22:58.930350   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565881-m03 minikube.k8s.io/updated_at=2024_07_17T00_22_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-565881 minikube.k8s.io/primary=false
	I0717 00:22:59.059327   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565881-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 00:22:59.190930   30817 start.go:319] duration metric: took 24.893385889s to joinCluster
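A sanity check after a control-plane join like the one above would be to confirm the third member appears in both the node list and the etcd static pods. Hypothetical follow-up, not commands from this run:

  kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes -o wide
  kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get pods -l component=etcd -o wide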
	I0717 00:22:59.191009   30817 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:22:59.191370   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:22:59.192680   30817 out.go:177] * Verifying Kubernetes components...
	I0717 00:22:59.194358   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:22:59.478074   30817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:22:59.513516   30817 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:22:59.513836   30817 kapi.go:59] client config for ha-565881: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.crt", KeyFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key", CAFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 00:22:59.513912   30817 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.238:8443
	I0717 00:22:59.514182   30817 node_ready.go:35] waiting up to 6m0s for node "ha-565881-m03" to be "Ready" ...
	I0717 00:22:59.514255   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:22:59.514265   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:59.514280   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:59.514289   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:59.517540   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:00.014851   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:00.014874   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:00.014883   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:00.014891   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:00.018750   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:00.514832   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:00.514858   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:00.514870   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:00.514874   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:00.519444   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:01.014782   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:01.014805   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:01.014813   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:01.014817   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:01.018702   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:01.514670   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:01.514698   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:01.514706   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:01.514709   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:01.519010   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:01.519823   30817 node_ready.go:53] node "ha-565881-m03" has status "Ready":"False"
	I0717 00:23:02.015202   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:02.015226   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:02.015237   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:02.015245   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:02.018448   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:02.514669   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:02.514692   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:02.514699   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:02.514703   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:02.518679   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:03.015337   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:03.015357   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:03.015365   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:03.015368   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:03.019663   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:03.514346   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:03.514367   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:03.514374   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:03.514378   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:03.517411   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:04.014511   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:04.014529   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:04.014537   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:04.014542   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:04.018629   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:04.020041   30817 node_ready.go:53] node "ha-565881-m03" has status "Ready":"False"
	I0717 00:23:04.514874   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:04.514898   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:04.514907   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:04.514910   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:04.518316   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:05.015002   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:05.015026   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:05.015042   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:05.015047   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:05.018792   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:05.514817   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:05.514843   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:05.514856   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:05.514862   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:05.520005   30817 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:23:06.015193   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:06.015216   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:06.015226   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:06.015232   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:06.019436   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:06.020540   30817 node_ready.go:53] node "ha-565881-m03" has status "Ready":"False"
	I0717 00:23:06.514977   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:06.514997   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:06.515005   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:06.515010   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:06.518528   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:07.014508   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:07.014530   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:07.014550   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:07.014554   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:07.017786   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:07.514542   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:07.514564   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:07.514571   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:07.514576   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:07.518371   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:08.014796   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:08.014822   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:08.014832   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:08.014837   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:08.019112   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:08.515154   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:08.515183   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:08.515193   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:08.515199   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:08.518568   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:08.519433   30817 node_ready.go:53] node "ha-565881-m03" has status "Ready":"False"
	I0717 00:23:09.014980   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:09.015002   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:09.015017   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:09.015022   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:09.019391   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:09.515090   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:09.515112   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:09.515120   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:09.515124   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:09.519083   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:10.014440   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:10.014471   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:10.014479   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:10.014483   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:10.017804   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:10.514764   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:10.514785   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:10.514793   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:10.514796   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:10.518279   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:11.015416   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:11.015437   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:11.015446   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:11.015451   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:11.019155   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:11.019686   30817 node_ready.go:53] node "ha-565881-m03" has status "Ready":"False"
	I0717 00:23:11.515170   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:11.515208   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:11.515218   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:11.515224   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:11.519110   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:12.015019   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:12.015042   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:12.015052   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:12.015058   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:12.019573   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:12.514641   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:12.514674   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:12.514682   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:12.514685   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:12.518420   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:13.015241   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:13.015261   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.015269   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.015273   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.018764   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:13.019369   30817 node_ready.go:49] node "ha-565881-m03" has status "Ready":"True"
	I0717 00:23:13.019387   30817 node_ready.go:38] duration metric: took 13.505188759s for node "ha-565881-m03" to be "Ready" ...
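The polling loop above is roughly what kubectl wait does; an equivalent one-liner, with the node name and timeout taken from the log, would be:

  kubectl wait --for=condition=Ready node/ha-565881-m03 --timeout=6m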
	I0717 00:23:13.019394   30817 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:23:13.019453   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:23:13.019465   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.019472   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.019477   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.026342   30817 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:23:13.035633   30817 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7wsqq" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.035728   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7wsqq
	I0717 00:23:13.035741   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.035751   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.035760   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.038501   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:23:13.039113   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:13.039127   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.039133   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.039138   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.041530   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:23:13.042212   30817 pod_ready.go:92] pod "coredns-7db6d8ff4d-7wsqq" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:13.042235   30817 pod_ready.go:81] duration metric: took 6.575818ms for pod "coredns-7db6d8ff4d-7wsqq" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.042245   30817 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xftzx" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.042304   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xftzx
	I0717 00:23:13.042315   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.042325   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.042335   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.045410   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:13.045900   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:13.045917   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.045925   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.045929   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.048290   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:23:13.048764   30817 pod_ready.go:92] pod "coredns-7db6d8ff4d-xftzx" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:13.048780   30817 pod_ready.go:81] duration metric: took 6.528388ms for pod "coredns-7db6d8ff4d-xftzx" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.048791   30817 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.048849   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881
	I0717 00:23:13.048861   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.048870   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.048876   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.051348   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:23:13.051796   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:13.051808   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.051815   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.051819   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.054698   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:23:13.055559   30817 pod_ready.go:92] pod "etcd-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:13.055578   30817 pod_ready.go:81] duration metric: took 6.779522ms for pod "etcd-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.055590   30817 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.055646   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m02
	I0717 00:23:13.055656   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.055666   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.055674   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.059245   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:13.060123   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:13.060141   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.060151   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.060156   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.072051   30817 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0717 00:23:13.072588   30817 pod_ready.go:92] pod "etcd-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:13.072607   30817 pod_ready.go:81] duration metric: took 17.009719ms for pod "etcd-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.072616   30817 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.215991   30817 request.go:629] Waited for 143.316913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m03
	I0717 00:23:13.216073   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m03
	I0717 00:23:13.216080   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.216092   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.216103   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.220188   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:13.415421   30817 request.go:629] Waited for 194.29659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:13.415482   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:13.415489   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.415497   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.415501   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.419268   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:13.615369   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m03
	I0717 00:23:13.615389   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.615397   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.615402   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.618753   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:13.815456   30817 request.go:629] Waited for 196.064615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:13.815542   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:13.815548   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.815556   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.815565   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.819217   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:14.073709   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m03
	I0717 00:23:14.073731   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:14.073739   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:14.073745   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:14.076969   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:14.216213   30817 request.go:629] Waited for 138.237276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:14.216278   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:14.216286   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:14.216295   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:14.216300   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:14.219940   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:14.573255   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m03
	I0717 00:23:14.573279   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:14.573289   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:14.573294   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:14.577343   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:14.615374   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:14.615408   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:14.615416   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:14.615421   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:14.618773   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:15.073373   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m03
	I0717 00:23:15.073395   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:15.073406   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:15.073412   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:15.077186   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:15.078010   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:15.078029   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:15.078039   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:15.078046   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:15.080986   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:23:15.081634   30817 pod_ready.go:92] pod "etcd-ha-565881-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:15.081652   30817 pod_ready.go:81] duration metric: took 2.009029844s for pod "etcd-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
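The per-pod readiness checks above could likewise be expressed with kubectl wait against the labels listed earlier (sketch only, labels as logged):

  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
  kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m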
	I0717 00:23:15.081668   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:15.216016   30817 request.go:629] Waited for 134.296133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881
	I0717 00:23:15.216072   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881
	I0717 00:23:15.216077   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:15.216084   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:15.216089   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:15.219511   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:15.415725   30817 request.go:629] Waited for 195.353261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:15.415778   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:15.415783   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:15.415791   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:15.415797   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:15.419068   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:15.419812   30817 pod_ready.go:92] pod "kube-apiserver-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:15.419837   30817 pod_ready.go:81] duration metric: took 338.159133ms for pod "kube-apiserver-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:15.419851   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:15.615891   30817 request.go:629] Waited for 195.979681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881-m02
	I0717 00:23:15.616011   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881-m02
	I0717 00:23:15.616021   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:15.616028   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:15.616033   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:15.619567   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:15.815604   30817 request.go:629] Waited for 195.354554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:15.815667   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:15.815672   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:15.815680   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:15.815686   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:15.819581   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:15.820216   30817 pod_ready.go:92] pod "kube-apiserver-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:15.820238   30817 pod_ready.go:81] duration metric: took 400.379052ms for pod "kube-apiserver-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:15.820250   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:16.015261   30817 request.go:629] Waited for 194.946962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881-m03
	I0717 00:23:16.015322   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881-m03
	I0717 00:23:16.015327   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:16.015335   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:16.015340   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:16.020361   30817 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:23:16.215777   30817 request.go:629] Waited for 194.358244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:16.215858   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:16.215866   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:16.215878   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:16.215886   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:16.219553   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:16.220680   30817 pod_ready.go:92] pod "kube-apiserver-ha-565881-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:16.220701   30817 pod_ready.go:81] duration metric: took 400.441569ms for pod "kube-apiserver-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:16.220711   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:16.415806   30817 request.go:629] Waited for 195.030033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881
	I0717 00:23:16.415868   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881
	I0717 00:23:16.415873   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:16.415881   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:16.415884   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:16.419707   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:16.615765   30817 request.go:629] Waited for 195.369569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:16.615830   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:16.615835   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:16.615842   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:16.615847   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:16.619918   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:16.620498   30817 pod_ready.go:92] pod "kube-controller-manager-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:16.620518   30817 pod_ready.go:81] duration metric: took 399.798082ms for pod "kube-controller-manager-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:16.620531   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:16.815644   30817 request.go:629] Waited for 195.032644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881-m02
	I0717 00:23:16.815702   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881-m02
	I0717 00:23:16.815709   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:16.815716   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:16.815723   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:16.818996   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:17.015998   30817 request.go:629] Waited for 196.358363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:17.016111   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:17.016122   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:17.016130   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:17.016134   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:17.019563   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:17.020035   30817 pod_ready.go:92] pod "kube-controller-manager-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:17.020057   30817 pod_ready.go:81] duration metric: took 399.517092ms for pod "kube-controller-manager-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:17.020070   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:17.216169   30817 request.go:629] Waited for 196.033808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881-m03
	I0717 00:23:17.216246   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881-m03
	I0717 00:23:17.216251   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:17.216258   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:17.216266   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:17.220549   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:17.415628   30817 request.go:629] Waited for 193.57967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:17.415685   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:17.415690   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:17.415698   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:17.415702   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:17.419208   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:17.419940   30817 pod_ready.go:92] pod "kube-controller-manager-ha-565881-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:17.419958   30817 pod_ready.go:81] duration metric: took 399.881416ms for pod "kube-controller-manager-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:17.419969   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2f9rj" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:17.616061   30817 request.go:629] Waited for 196.018703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2f9rj
	I0717 00:23:17.616123   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2f9rj
	I0717 00:23:17.616129   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:17.616137   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:17.616142   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:17.619667   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:17.815557   30817 request.go:629] Waited for 195.164155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:17.815610   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:17.815618   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:17.815625   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:17.815630   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:17.818946   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:17.819794   30817 pod_ready.go:92] pod "kube-proxy-2f9rj" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:17.819813   30817 pod_ready.go:81] duration metric: took 399.826808ms for pod "kube-proxy-2f9rj" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:17.819826   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7p2jl" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:18.016159   30817 request.go:629] Waited for 196.266113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7p2jl
	I0717 00:23:18.016245   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7p2jl
	I0717 00:23:18.016257   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:18.016268   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:18.016277   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:18.019661   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:18.215718   30817 request.go:629] Waited for 195.353457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:18.215791   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:18.215798   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:18.215809   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:18.215814   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:18.219415   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:18.220029   30817 pod_ready.go:92] pod "kube-proxy-7p2jl" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:18.220049   30817 pod_ready.go:81] duration metric: took 400.214022ms for pod "kube-proxy-7p2jl" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:18.220059   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k5x6x" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:18.416062   30817 request.go:629] Waited for 195.938205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k5x6x
	I0717 00:23:18.416119   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k5x6x
	I0717 00:23:18.416125   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:18.416131   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:18.416135   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:18.420688   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:18.615740   30817 request.go:629] Waited for 194.365134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:18.615819   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:18.615830   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:18.615838   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:18.615845   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:18.619901   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:18.620633   30817 pod_ready.go:92] pod "kube-proxy-k5x6x" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:18.620654   30817 pod_ready.go:81] duration metric: took 400.588373ms for pod "kube-proxy-k5x6x" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:18.620667   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:18.816026   30817 request.go:629] Waited for 195.241694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881
	I0717 00:23:18.816085   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881
	I0717 00:23:18.816090   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:18.816098   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:18.816101   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:18.819500   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:19.015301   30817 request.go:629] Waited for 194.805861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:19.015391   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:19.015405   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:19.015413   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:19.015417   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:19.019741   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:19.020440   30817 pod_ready.go:92] pod "kube-scheduler-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:19.020462   30817 pod_ready.go:81] duration metric: took 399.785274ms for pod "kube-scheduler-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:19.020475   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:19.215528   30817 request.go:629] Waited for 194.97553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881-m02
	I0717 00:23:19.215589   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881-m02
	I0717 00:23:19.215598   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:19.215605   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:19.215609   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:19.219123   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:19.416233   30817 request.go:629] Waited for 196.398252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:19.416281   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:19.416287   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:19.416294   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:19.416299   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:19.419669   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:19.420418   30817 pod_ready.go:92] pod "kube-scheduler-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:19.420438   30817 pod_ready.go:81] duration metric: took 399.955187ms for pod "kube-scheduler-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:19.420447   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:19.615368   30817 request.go:629] Waited for 194.859433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881-m03
	I0717 00:23:19.615436   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881-m03
	I0717 00:23:19.615441   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:19.615449   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:19.615453   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:19.619062   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:19.816298   30817 request.go:629] Waited for 196.280861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:19.816381   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:19.816389   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:19.816404   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:19.816414   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:19.820466   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:19.821245   30817 pod_ready.go:92] pod "kube-scheduler-ha-565881-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:19.821286   30817 pod_ready.go:81] duration metric: took 400.822243ms for pod "kube-scheduler-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:19.821306   30817 pod_ready.go:38] duration metric: took 6.801901637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:23:19.821328   30817 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:23:19.821397   30817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:23:19.839119   30817 api_server.go:72] duration metric: took 20.648070367s to wait for apiserver process to appear ...
	I0717 00:23:19.839144   30817 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:23:19.839165   30817 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0717 00:23:19.843248   30817 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0717 00:23:19.843334   30817 round_trippers.go:463] GET https://192.168.39.238:8443/version
	I0717 00:23:19.843344   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:19.843352   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:19.843359   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:19.844189   30817 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 00:23:19.844258   30817 api_server.go:141] control plane version: v1.30.2
	I0717 00:23:19.844275   30817 api_server.go:131] duration metric: took 5.124245ms to wait for apiserver health ...
	I0717 00:23:19.844286   30817 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:23:20.015736   30817 request.go:629] Waited for 171.346584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:23:20.015793   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:23:20.015798   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:20.015806   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:20.015811   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:20.022896   30817 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:23:20.029400   30817 system_pods.go:59] 24 kube-system pods found
	I0717 00:23:20.029427   30817 system_pods.go:61] "coredns-7db6d8ff4d-7wsqq" [4a433e03-decb-405d-82f1-b14a72412c8a] Running
	I0717 00:23:20.029438   30817 system_pods.go:61] "coredns-7db6d8ff4d-xftzx" [01fe6b06-0568-4da7-bd0c-1883bc99995c] Running
	I0717 00:23:20.029442   30817 system_pods.go:61] "etcd-ha-565881" [4971f520-5352-442e-b9a2-0944b0755b7f] Running
	I0717 00:23:20.029446   30817 system_pods.go:61] "etcd-ha-565881-m02" [4566d137-b6d8-4af0-8c19-db42aad855cc] Running
	I0717 00:23:20.029450   30817 system_pods.go:61] "etcd-ha-565881-m03" [dada7623-e9d0-4848-a760-4a0a7f63990e] Running
	I0717 00:23:20.029453   30817 system_pods.go:61] "kindnet-5lrdt" [bd3c879a-726b-40ed-ba4f-897bf43cda26] Running
	I0717 00:23:20.029456   30817 system_pods.go:61] "kindnet-ctstx" [84c6251a-f4d9-4bd5-813e-52c72e3a5a83] Running
	I0717 00:23:20.029459   30817 system_pods.go:61] "kindnet-k882n" [a1f0c383-2430-4479-90ad-d944476aee6f] Running
	I0717 00:23:20.029462   30817 system_pods.go:61] "kube-apiserver-ha-565881" [ef350ec6-b254-4b11-8130-fb059c05bc73] Running
	I0717 00:23:20.029468   30817 system_pods.go:61] "kube-apiserver-ha-565881-m02" [58bb06fd-18e6-4457-8bd9-82438e5d6e87] Running
	I0717 00:23:20.029471   30817 system_pods.go:61] "kube-apiserver-ha-565881-m03" [f4678e70-6416-4623-a8b1-ddb0a1c31843] Running
	I0717 00:23:20.029476   30817 system_pods.go:61] "kube-controller-manager-ha-565881" [30ebcd5f-fb7b-4877-bc4b-e04de10a184e] Running
	I0717 00:23:20.029480   30817 system_pods.go:61] "kube-controller-manager-ha-565881-m02" [dfc4ee73-fe0f-4ec4-bdb9-3827093d3ea0] Running
	I0717 00:23:20.029491   30817 system_pods.go:61] "kube-controller-manager-ha-565881-m03" [8f256263-ae87-4500-9367-bbdfe67effd6] Running
	I0717 00:23:20.029494   30817 system_pods.go:61] "kube-proxy-2f9rj" [d5e16caa-15e9-4295-8a9a-0e66912f9f1b] Running
	I0717 00:23:20.029497   30817 system_pods.go:61] "kube-proxy-7p2jl" [74f5aff6-5e99-4cfe-af04-94198e8d9616] Running
	I0717 00:23:20.029500   30817 system_pods.go:61] "kube-proxy-k5x6x" [d6bf8a53-e66d-4e97-b1f4-470c70ee87e2] Running
	I0717 00:23:20.029503   30817 system_pods.go:61] "kube-scheduler-ha-565881" [876bc7f0-71d6-45b1-a313-d94df8f89f18] Running
	I0717 00:23:20.029506   30817 system_pods.go:61] "kube-scheduler-ha-565881-m02" [9734780b-67c9-4727-badb-f6ba028ba095] Running
	I0717 00:23:20.029509   30817 system_pods.go:61] "kube-scheduler-ha-565881-m03" [5e074a3c-dff5-4df9-aa3b-deb2e8e6cdde] Running
	I0717 00:23:20.029512   30817 system_pods.go:61] "kube-vip-ha-565881" [7d058028-c841-4807-936f-3f81c1718a93] Running
	I0717 00:23:20.029515   30817 system_pods.go:61] "kube-vip-ha-565881-m02" [06e40aae-1d32-4577-92f5-32a6ce3e1813] Running
	I0717 00:23:20.029518   30817 system_pods.go:61] "kube-vip-ha-565881-m03" [85f81bf9-9465-4eaf-ba50-7aac4090d563] Running
	I0717 00:23:20.029523   30817 system_pods.go:61] "storage-provisioner" [0aa1050a-43e1-4f7a-a2df-80cafb48e673] Running
	I0717 00:23:20.029531   30817 system_pods.go:74] duration metric: took 185.238424ms to wait for pod list to return data ...
	I0717 00:23:20.029541   30817 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:23:20.215985   30817 request.go:629] Waited for 186.373366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:23:20.216060   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:23:20.216066   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:20.216073   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:20.216080   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:20.219992   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:20.220125   30817 default_sa.go:45] found service account: "default"
	I0717 00:23:20.220141   30817 default_sa.go:55] duration metric: took 190.590459ms for default service account to be created ...
	I0717 00:23:20.220151   30817 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:23:20.415501   30817 request.go:629] Waited for 195.283071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:23:20.415586   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:23:20.415618   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:20.415630   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:20.415634   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:20.422110   30817 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:23:20.430762   30817 system_pods.go:86] 24 kube-system pods found
	I0717 00:23:20.430788   30817 system_pods.go:89] "coredns-7db6d8ff4d-7wsqq" [4a433e03-decb-405d-82f1-b14a72412c8a] Running
	I0717 00:23:20.430793   30817 system_pods.go:89] "coredns-7db6d8ff4d-xftzx" [01fe6b06-0568-4da7-bd0c-1883bc99995c] Running
	I0717 00:23:20.430797   30817 system_pods.go:89] "etcd-ha-565881" [4971f520-5352-442e-b9a2-0944b0755b7f] Running
	I0717 00:23:20.430801   30817 system_pods.go:89] "etcd-ha-565881-m02" [4566d137-b6d8-4af0-8c19-db42aad855cc] Running
	I0717 00:23:20.430804   30817 system_pods.go:89] "etcd-ha-565881-m03" [dada7623-e9d0-4848-a760-4a0a7f63990e] Running
	I0717 00:23:20.430808   30817 system_pods.go:89] "kindnet-5lrdt" [bd3c879a-726b-40ed-ba4f-897bf43cda26] Running
	I0717 00:23:20.430812   30817 system_pods.go:89] "kindnet-ctstx" [84c6251a-f4d9-4bd5-813e-52c72e3a5a83] Running
	I0717 00:23:20.430816   30817 system_pods.go:89] "kindnet-k882n" [a1f0c383-2430-4479-90ad-d944476aee6f] Running
	I0717 00:23:20.430819   30817 system_pods.go:89] "kube-apiserver-ha-565881" [ef350ec6-b254-4b11-8130-fb059c05bc73] Running
	I0717 00:23:20.430824   30817 system_pods.go:89] "kube-apiserver-ha-565881-m02" [58bb06fd-18e6-4457-8bd9-82438e5d6e87] Running
	I0717 00:23:20.430828   30817 system_pods.go:89] "kube-apiserver-ha-565881-m03" [f4678e70-6416-4623-a8b1-ddb0a1c31843] Running
	I0717 00:23:20.430834   30817 system_pods.go:89] "kube-controller-manager-ha-565881" [30ebcd5f-fb7b-4877-bc4b-e04de10a184e] Running
	I0717 00:23:20.430840   30817 system_pods.go:89] "kube-controller-manager-ha-565881-m02" [dfc4ee73-fe0f-4ec4-bdb9-3827093d3ea0] Running
	I0717 00:23:20.430847   30817 system_pods.go:89] "kube-controller-manager-ha-565881-m03" [8f256263-ae87-4500-9367-bbdfe67effd6] Running
	I0717 00:23:20.430856   30817 system_pods.go:89] "kube-proxy-2f9rj" [d5e16caa-15e9-4295-8a9a-0e66912f9f1b] Running
	I0717 00:23:20.430862   30817 system_pods.go:89] "kube-proxy-7p2jl" [74f5aff6-5e99-4cfe-af04-94198e8d9616] Running
	I0717 00:23:20.430871   30817 system_pods.go:89] "kube-proxy-k5x6x" [d6bf8a53-e66d-4e97-b1f4-470c70ee87e2] Running
	I0717 00:23:20.430878   30817 system_pods.go:89] "kube-scheduler-ha-565881" [876bc7f0-71d6-45b1-a313-d94df8f89f18] Running
	I0717 00:23:20.430887   30817 system_pods.go:89] "kube-scheduler-ha-565881-m02" [9734780b-67c9-4727-badb-f6ba028ba095] Running
	I0717 00:23:20.430893   30817 system_pods.go:89] "kube-scheduler-ha-565881-m03" [5e074a3c-dff5-4df9-aa3b-deb2e8e6cdde] Running
	I0717 00:23:20.430899   30817 system_pods.go:89] "kube-vip-ha-565881" [7d058028-c841-4807-936f-3f81c1718a93] Running
	I0717 00:23:20.430907   30817 system_pods.go:89] "kube-vip-ha-565881-m02" [06e40aae-1d32-4577-92f5-32a6ce3e1813] Running
	I0717 00:23:20.430913   30817 system_pods.go:89] "kube-vip-ha-565881-m03" [85f81bf9-9465-4eaf-ba50-7aac4090d563] Running
	I0717 00:23:20.430921   30817 system_pods.go:89] "storage-provisioner" [0aa1050a-43e1-4f7a-a2df-80cafb48e673] Running
	I0717 00:23:20.430927   30817 system_pods.go:126] duration metric: took 210.770682ms to wait for k8s-apps to be running ...
	I0717 00:23:20.430936   30817 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:23:20.430982   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:23:20.446693   30817 system_svc.go:56] duration metric: took 15.749024ms WaitForService to wait for kubelet
	I0717 00:23:20.446720   30817 kubeadm.go:582] duration metric: took 21.255674297s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:23:20.446754   30817 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:23:20.616184   30817 request.go:629] Waited for 169.340619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes
	I0717 00:23:20.616242   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes
	I0717 00:23:20.616247   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:20.616254   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:20.616258   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:20.620476   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:20.622374   30817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:23:20.622400   30817 node_conditions.go:123] node cpu capacity is 2
	I0717 00:23:20.622414   30817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:23:20.622418   30817 node_conditions.go:123] node cpu capacity is 2
	I0717 00:23:20.622423   30817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:23:20.622428   30817 node_conditions.go:123] node cpu capacity is 2
	I0717 00:23:20.622433   30817 node_conditions.go:105] duration metric: took 175.670539ms to run NodePressure ...
	I0717 00:23:20.622449   30817 start.go:241] waiting for startup goroutines ...
	I0717 00:23:20.622474   30817 start.go:255] writing updated cluster config ...
	I0717 00:23:20.622902   30817 ssh_runner.go:195] Run: rm -f paused
	I0717 00:23:20.675499   30817 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:23:20.678010   30817 out.go:177] * Done! kubectl is now configured to use "ha-565881" cluster and "default" namespace by default
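The wait sequence recorded above (pod_ready.go polling each control-plane pod for the Ready condition, then the GET /healthz probe against the apiserver) can be reproduced outside minikube with a short client-go program. The following is a minimal sketch, not minikube's own code: the kubeconfig path is a placeholder, and the namespace/pod name are copied from the log purely for illustration.

```go
// Sketch: poll apiserver /healthz and a pod's Ready condition, mirroring the
// pod_ready.go / api_server.go steps shown in the log above. Illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; substitute the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Equivalent of "Checking apiserver healthz": GET /healthz should return "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// Poll until the pod reports Ready=True, as the pod_ready loop does (6m budget).
	err = wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").
				Get(ctx, "kube-apiserver-ha-565881", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```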
	
	
	==> CRI-O <==
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.031671695Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0540dc27-4a1b-4a47-8a6e-01bee4a38bef name=/runtime.v1.RuntimeService/Version
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.033146441Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ecae2a3-416a-4e43-86b4-326e08c99bf3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.033598210Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176018033577136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ecae2a3-416a-4e43-86b4-326e08c99bf3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.034079518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a2c54fa-6b1d-4535-848a-e418f68ce38a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.034151359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a2c54fa-6b1d-4535-848a-e418f68ce38a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.034398627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721175803248450444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667828411216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c,PodSandboxId:f467ed059c56cdaaf8de2830ba730e06e558235deeb9422958622f92d7384b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667809002075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52bef5d657a6cb69965245c2615be216b56d82ab4763232390ed306790434354,PodSandboxId:764ba5023d3eee2d36d44948179f7941d3be91025b80a670618eef4c52d68c13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721175667689999819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721175655675663031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175653
514923868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c44e183ef1f377bf131b0f0b7f0976adbdf72efd90beb01dfa5c8be36324e5,PodSandboxId:bc50d045ef7cdfc6e034ee33dca219eca6353dd58f575b46aa62d22e927f6079,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172117563523
0999243,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22442ecb09ab7532c1c9a7afada397a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175633405344562,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175633392337218,Labels:map[string]string{io.kubernetes.container.name: et
cd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c,PodSandboxId:bd261c9ae650e8f175c47bca295568fcc16c69653c2291cfeac60cbf338961c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175633365293199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff,PodSandboxId:783f00b872a663d4351199571512126920b7c28ffc22524bad0b17ff314b2eec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175633277908414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a2c54fa-6b1d-4535-848a-e418f68ce38a name=/runtime.v1.RuntimeService/ListContainers
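The crio debug entries above correspond to three CRI RPCs: RuntimeService/Version, ImageService/ImageFsInfo, and RuntimeService/ListContainers with an empty filter ("No filters were applied"). Below is a minimal Go sketch that issues the same calls directly against the CRI-O socket; it assumes the conventional /var/run/crio/crio.sock endpoint and is illustrative only, not code from the test harness.

```go
// Sketch: issue the CRI RPCs that appear in the crio debug log above.
// Assumes CRI-O is listening on its default unix socket.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// RuntimeService/Version -> e.g. cri-o 1.29.1, CRI v1 (as in the log).
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// ImageService/ImageFsInfo -> image filesystem usage per mountpoint.
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Printf("image fs %s: %d bytes used\n", f.FsId.Mountpoint, f.UsedBytes.Value)
	}

	// RuntimeService/ListContainers with no filter, returning the full container list.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}
```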
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.072314045Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0953872a-5808-4ed9-a327-995fd8903816 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.072403566Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0953872a-5808-4ed9-a327-995fd8903816 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.073896012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=daa55e74-5e1b-4055-ac5a-0edcd3ea0d30 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.074357039Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176018074333843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=daa55e74-5e1b-4055-ac5a-0edcd3ea0d30 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.074890530Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcc83a96-5d71-4e2f-a04c-c22719b6bd4b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.074966352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcc83a96-5d71-4e2f-a04c-c22719b6bd4b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.075215839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721175803248450444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667828411216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c,PodSandboxId:f467ed059c56cdaaf8de2830ba730e06e558235deeb9422958622f92d7384b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667809002075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52bef5d657a6cb69965245c2615be216b56d82ab4763232390ed306790434354,PodSandboxId:764ba5023d3eee2d36d44948179f7941d3be91025b80a670618eef4c52d68c13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721175667689999819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721175655675663031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175653
514923868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c44e183ef1f377bf131b0f0b7f0976adbdf72efd90beb01dfa5c8be36324e5,PodSandboxId:bc50d045ef7cdfc6e034ee33dca219eca6353dd58f575b46aa62d22e927f6079,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172117563523
0999243,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22442ecb09ab7532c1c9a7afada397a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175633405344562,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175633392337218,Labels:map[string]string{io.kubernetes.container.name: et
cd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c,PodSandboxId:bd261c9ae650e8f175c47bca295568fcc16c69653c2291cfeac60cbf338961c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175633365293199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff,PodSandboxId:783f00b872a663d4351199571512126920b7c28ffc22524bad0b17ff314b2eec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175633277908414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcc83a96-5d71-4e2f-a04c-c22719b6bd4b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.096380283Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=d7ea5910-1e33-479e-87e9-7e0106ac20f6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.096658608Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-sxdsp,Uid:7a532a93-0ab1-4911-b7f5-9d85eda2be75,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175801959385834,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:23:21.627315007Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f467ed059c56cdaaf8de2830ba730e06e558235deeb9422958622f92d7384b50,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xftzx,Uid:01fe6b06-0568-4da7-bd0c-1883bc99995c,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1721175667543103356,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:21:07.214009072Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7wsqq,Uid:4a433e03-decb-405d-82f1-b14a72412c8a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175667539564286,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-07-17T00:21:07.213868280Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:764ba5023d3eee2d36d44948179f7941d3be91025b80a670618eef4c52d68c13,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0aa1050a-43e1-4f7a-a2df-80cafb48e673,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175667516880822,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T00:21:07.209314242Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&PodSandboxMetadata{Name:kube-proxy-7p2jl,Uid:74f5aff6-5e99-4cfe-af04-94198e8d9616,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175653220303170,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-07-17T00:20:52.887845117Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&PodSandboxMetadata{Name:kindnet-5lrdt,Uid:bd3c879a-726b-40ed-ba4f-897bf43cda26,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175653218014976,Labels:map[string]string{app: kindnet,controller-revision-hash: 545f566499,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:20:52.903992589Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&PodSandboxMetadata{Name:etcd-ha-565881,Uid:5f82fe075280b90a17d8f04a23fc7629,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1721175633118938420,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.238:2379,kubernetes.io/config.hash: 5f82fe075280b90a17d8f04a23fc7629,kubernetes.io/config.seen: 2024-07-17T00:20:32.635373262Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:783f00b872a663d4351199571512126920b7c28ffc22524bad0b17ff314b2eec,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-565881,Uid:137a148a990fa52e8281e355098ea021,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175633108917952,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a99
0fa52e8281e355098ea021,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.238:8443,kubernetes.io/config.hash: 137a148a990fa52e8281e355098ea021,kubernetes.io/config.seen: 2024-07-17T00:20:32.635374808Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bc50d045ef7cdfc6e034ee33dca219eca6353dd58f575b46aa62d22e927f6079,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-565881,Uid:22442ecb09ab7532c1c9a7afada397a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175633104462537,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22442ecb09ab7532c1c9a7afada397a4,},Annotations:map[string]string{kubernetes.io/config.hash: 22442ecb09ab7532c1c9a7afada397a4,kubernetes.io/config.seen: 2024-07-17T00:20:32.635371479Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bd261c9ae650e8f175c4
7bca295568fcc16c69653c2291cfeac60cbf338961c9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-565881,Uid:960ed960c6610568e154d20884b393df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175633099458806,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 960ed960c6610568e154d20884b393df,kubernetes.io/config.seen: 2024-07-17T00:20:32.635376290Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-565881,Uid:b826e45ce780868932f8d9a5a17c6b9c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175633092007394,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b826e45ce780868932f8d9a5a17c6b9c,kubernetes.io/config.seen: 2024-07-17T00:20:32.635367069Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d7ea5910-1e33-479e-87e9-7e0106ac20f6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.097803508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15f3cb18-7bbf-40e9-9af2-a1c7e6574ff0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.097876245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15f3cb18-7bbf-40e9-9af2-a1c7e6574ff0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.098279034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721175803248450444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667828411216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c,PodSandboxId:f467ed059c56cdaaf8de2830ba730e06e558235deeb9422958622f92d7384b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667809002075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52bef5d657a6cb69965245c2615be216b56d82ab4763232390ed306790434354,PodSandboxId:764ba5023d3eee2d36d44948179f7941d3be91025b80a670618eef4c52d68c13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721175667689999819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721175655675663031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175653
514923868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c44e183ef1f377bf131b0f0b7f0976adbdf72efd90beb01dfa5c8be36324e5,PodSandboxId:bc50d045ef7cdfc6e034ee33dca219eca6353dd58f575b46aa62d22e927f6079,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172117563523
0999243,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22442ecb09ab7532c1c9a7afada397a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175633405344562,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175633392337218,Labels:map[string]string{io.kubernetes.container.name: et
cd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c,PodSandboxId:bd261c9ae650e8f175c47bca295568fcc16c69653c2291cfeac60cbf338961c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175633365293199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff,PodSandboxId:783f00b872a663d4351199571512126920b7c28ffc22524bad0b17ff314b2eec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175633277908414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15f3cb18-7bbf-40e9-9af2-a1c7e6574ff0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.129638289Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19c575fe-8205-4556-bafa-464f7e80e660 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.129768501Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19c575fe-8205-4556-bafa-464f7e80e660 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.131285375Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd81b714-a398-4f08-a83f-b005a64f0b33 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.131878920Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176018131847151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd81b714-a398-4f08-a83f-b005a64f0b33 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.132464353Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb87f120-0878-46a9-bd99-11d8345fb90c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.132556842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb87f120-0878-46a9-bd99-11d8345fb90c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:26:58 ha-565881 crio[679]: time="2024-07-17 00:26:58.132898016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721175803248450444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667828411216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c,PodSandboxId:f467ed059c56cdaaf8de2830ba730e06e558235deeb9422958622f92d7384b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667809002075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52bef5d657a6cb69965245c2615be216b56d82ab4763232390ed306790434354,PodSandboxId:764ba5023d3eee2d36d44948179f7941d3be91025b80a670618eef4c52d68c13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721175667689999819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721175655675663031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175653
514923868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c44e183ef1f377bf131b0f0b7f0976adbdf72efd90beb01dfa5c8be36324e5,PodSandboxId:bc50d045ef7cdfc6e034ee33dca219eca6353dd58f575b46aa62d22e927f6079,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172117563523
0999243,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22442ecb09ab7532c1c9a7afada397a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175633405344562,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175633392337218,Labels:map[string]string{io.kubernetes.container.name: et
cd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c,PodSandboxId:bd261c9ae650e8f175c47bca295568fcc16c69653c2291cfeac60cbf338961c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175633365293199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff,PodSandboxId:783f00b872a663d4351199571512126920b7c28ffc22524bad0b17ff314b2eec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175633277908414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb87f120-0878-46a9-bd99-11d8345fb90c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	28b495a055524       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e0bd927bf2760       busybox-fc5497c4f-sxdsp
	928ee85bf546b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   f688446a5f59c       coredns-7db6d8ff4d-7wsqq
	cda0c9ceea230       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   f467ed059c56c       coredns-7db6d8ff4d-xftzx
	52bef5d657a6c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   764ba5023d3ee       storage-provisioner
	52b45808cde82       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    6 minutes ago       Running             kindnet-cni               0                   5c5494014c8b1       kindnet-5lrdt
	e572bb9aec2e8       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      6 minutes ago       Running             kube-proxy                0                   12f43031f4b04       kube-proxy-7p2jl
	14c44e183ef1f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   bc50d045ef7cd       kube-vip-ha-565881
	1ec015ce8f841       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      6 minutes ago       Running             kube-scheduler            0                   a6e2148781333       kube-scheduler-ha-565881
	ab8577693652f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   afbb712100717       etcd-ha-565881
	2735221f6ad7f       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      6 minutes ago       Running             kube-controller-manager   0                   bd261c9ae650e       kube-controller-manager-ha-565881
	c44889c22020b       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      6 minutes ago       Running             kube-apiserver            0                   783f00b872a66       kube-apiserver-ha-565881
	
	
	==> coredns [928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519] <==
	[INFO] 10.244.2.2:44448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000657716s
	[INFO] 10.244.2.2:51292 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.019019727s
	[INFO] 10.244.1.2:56403 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158179s
	[INFO] 10.244.1.2:35250 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000142805s
	[INFO] 10.244.1.2:40336 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002059439s
	[INFO] 10.244.0.4:37111 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137796s
	[INFO] 10.244.0.4:38097 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000091196s
	[INFO] 10.244.0.4:41409 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000489883s
	[INFO] 10.244.0.4:47790 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002006429s
	[INFO] 10.244.2.2:36117 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000220718s
	[INFO] 10.244.2.2:57319 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118439s
	[INFO] 10.244.1.2:60677 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002037782s
	[INFO] 10.244.0.4:57531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130517s
	[INFO] 10.244.0.4:53255 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001911233s
	[INFO] 10.244.0.4:50878 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001515166s
	[INFO] 10.244.0.4:59609 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005296s
	[INFO] 10.244.0.4:41601 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174604s
	[INFO] 10.244.2.2:54282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144175s
	[INFO] 10.244.2.2:33964 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000291713s
	[INFO] 10.244.2.2:38781 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098409s
	[INFO] 10.244.1.2:58603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132708s
	[INFO] 10.244.2.2:42857 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129277s
	[INFO] 10.244.2.2:45518 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176537s
	[INFO] 10.244.1.2:38437 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111768s
	[INFO] 10.244.1.2:41860 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000210674s
	
	
	==> coredns [cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c] <==
	[INFO] 10.244.1.2:55200 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103152s
	[INFO] 10.244.1.2:37940 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070078s
	[INFO] 10.244.1.2:48078 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001304627s
	[INFO] 10.244.1.2:45924 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156493s
	[INFO] 10.244.1.2:43327 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095286s
	[INFO] 10.244.1.2:49398 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142472s
	[INFO] 10.244.0.4:55102 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007626s
	[INFO] 10.244.0.4:47068 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069112s
	[INFO] 10.244.0.4:33535 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071265s
	[INFO] 10.244.2.2:46044 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143827s
	[INFO] 10.244.1.2:35109 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129619s
	[INFO] 10.244.1.2:48280 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075012s
	[INFO] 10.244.1.2:56918 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057676s
	[INFO] 10.244.0.4:36784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195227s
	[INFO] 10.244.0.4:42172 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072797s
	[INFO] 10.244.0.4:38471 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054713s
	[INFO] 10.244.0.4:55016 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052519s
	[INFO] 10.244.2.2:35590 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000286422s
	[INFO] 10.244.2.2:40026 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000371873s
	[INFO] 10.244.1.2:41980 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000310548s
	[INFO] 10.244.1.2:46445 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000266363s
	[INFO] 10.244.0.4:35492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100381s
	[INFO] 10.244.0.4:42544 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00004087s
	[INFO] 10.244.0.4:35643 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111779s
	[INFO] 10.244.0.4:38933 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000030463s
	
	
	==> describe nodes <==
	Name:               ha-565881
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_20_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:20:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:26:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:20:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:20:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:20:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:21:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-565881
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6899f2542334306bf4c50f49702dfb5
	  System UUID:                c6899f25-4233-4306-bf4c-50f49702dfb5
	  Boot ID:                    f5b041e8-ae19-4f7a-ac0d-a039fbca796b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sxdsp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 coredns-7db6d8ff4d-7wsqq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m6s
	  kube-system                 coredns-7db6d8ff4d-xftzx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m5s
	  kube-system                 etcd-ha-565881                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-5lrdt                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-apiserver-ha-565881             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-565881    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-proxy-7p2jl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-ha-565881             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-vip-ha-565881                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m4s   kube-proxy       
	  Normal  Starting                 6m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m19s  kubelet          Node ha-565881 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s  kubelet          Node ha-565881 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s  kubelet          Node ha-565881 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m7s   node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	  Normal  NodeReady                5m51s  kubelet          Node ha-565881 status is now: NodeReady
	  Normal  RegisteredNode           5m     node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	  Normal  RegisteredNode           3m45s  node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	
	
	Name:               ha-565881-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_21_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:21:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:24:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:25:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:25:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:25:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:25:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-565881-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 002cfcb8afdc450f9dbf024dbe1dd968
	  System UUID:                002cfcb8-afdc-450f-9dbf-024dbe1dd968
	  Boot ID:                    e960dff3-4ffd-424d-9228-f77aa5cf198a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rdpwj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-565881-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m17s
	  kube-system                 kindnet-k882n                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m19s
	  kube-system                 kube-apiserver-ha-565881-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-ha-565881-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-2f9rj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-ha-565881-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-vip-ha-565881-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node ha-565881-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node ha-565881-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x7 over 5m19s)  kubelet          Node ha-565881-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  RegisteredNode           5m                     node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-565881-m02 status is now: NodeNotReady
	
	
	Name:               ha-565881-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_22_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:22:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:26:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:23:26 +0000   Wed, 17 Jul 2024 00:22:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:23:26 +0000   Wed, 17 Jul 2024 00:22:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:23:26 +0000   Wed, 17 Jul 2024 00:22:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:23:26 +0000   Wed, 17 Jul 2024 00:23:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-565881-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d0000c1f74247c095cd9247f3f0c350
	  System UUID:                3d0000c1-f742-47c0-95cd-9247f3f0c350
	  Boot ID:                    4fa63eff-e26e-4a4c-8360-5dc73aba6ea0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lmz4q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-565881-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m1s
	  kube-system                 kindnet-ctstx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m3s
	  kube-system                 kube-apiserver-ha-565881-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-controller-manager-ha-565881-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-proxy-k5x6x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-ha-565881-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-vip-ha-565881-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m3s (x8 over 4m3s)  kubelet          Node ha-565881-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x8 over 4m3s)  kubelet          Node ha-565881-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x7 over 4m3s)  kubelet          Node ha-565881-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-565881-m03 event: Registered Node ha-565881-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-565881-m03 event: Registered Node ha-565881-m03 in Controller
	  Normal  RegisteredNode           3m45s                node-controller  Node ha-565881-m03 event: Registered Node ha-565881-m03 in Controller
	
	
	Name:               ha-565881-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_23_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:23:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:26:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:23:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:23:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:23:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:24:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-565881-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 008ae63d929d475b8bab60c832202ce9
	  System UUID:                008ae63d-929d-475b-8bab-60c832202ce9
	  Boot ID:                    3540bc22-336a-438e-8b63-852810ced32c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-xz7nj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-p5xml    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m54s            kube-proxy       
	  Normal  RegisteredNode           3m               node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-565881-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-565881-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-565881-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m57s            node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  RegisteredNode           2m55s            node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  NodeReady                2m41s            kubelet          Node ha-565881-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul17 00:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049979] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040150] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.513897] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.375698] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.513665] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.825427] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057593] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065677] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.195559] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.109938] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.261884] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.129275] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.597572] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.062309] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.075955] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.082514] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.034910] kauditd_printk_skb: 21 callbacks suppressed
	[Jul17 00:21] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.822749] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36] <==
	{"level":"warn","ts":"2024-07-17T00:26:58.428439Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.435555Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.438287Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.450573Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.451393Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.453531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.457648Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.460061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.465973Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.469348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.479964Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.489465Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.499436Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.504319Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.507524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.512909Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.51954Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.528995Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.537029Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.540805Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.545438Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.554014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.562864Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.569457Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:26:58.613018Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:26:58 up 6 min,  0 users,  load average: 0.17, 0.18, 0.10
	Linux ha-565881 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146] <==
	I0717 00:26:26.729193       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:26:36.724424       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:26:36.724546       1 main.go:303] handling current node
	I0717 00:26:36.724580       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:26:36.724599       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:26:36.724896       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:26:36.725001       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:26:36.725107       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:26:36.725130       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:26:46.731063       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:26:46.731098       1 main.go:303] handling current node
	I0717 00:26:46.731111       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:26:46.731116       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:26:46.731297       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:26:46.731325       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:26:46.731379       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:26:46.731401       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:26:56.722384       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:26:56.722476       1 main.go:303] handling current node
	I0717 00:26:56.722508       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:26:56.722527       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:26:56.722796       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:26:56.722839       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:26:56.722924       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:26:56.722943       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff] <==
	I0717 00:20:38.428497       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 00:20:38.441536       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238]
	I0717 00:20:38.442661       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:20:38.448071       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:20:38.632419       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 00:20:39.609562       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:20:39.639817       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 00:20:39.666647       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:20:52.735821       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 00:20:52.834990       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 00:23:25.816126       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37258: use of closed network connection
	E0717 00:23:26.006470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37288: use of closed network connection
	E0717 00:23:26.398340       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37324: use of closed network connection
	E0717 00:23:26.575363       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37338: use of closed network connection
	E0717 00:23:26.756657       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37350: use of closed network connection
	E0717 00:23:26.948378       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37364: use of closed network connection
	E0717 00:23:27.143312       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37386: use of closed network connection
	E0717 00:23:27.325862       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37414: use of closed network connection
	E0717 00:23:27.613922       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37442: use of closed network connection
	E0717 00:23:27.795829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37464: use of closed network connection
	E0717 00:23:27.979849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37486: use of closed network connection
	E0717 00:23:28.145679       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37498: use of closed network connection
	E0717 00:23:28.327975       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37520: use of closed network connection
	E0717 00:23:28.507457       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37544: use of closed network connection
	W0717 00:24:58.447330       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238 192.168.39.97]
	
	
	==> kube-controller-manager [2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c] <==
	I0717 00:23:21.940022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="201.725283ms"
	I0717 00:23:22.153168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="212.381321ms"
	I0717 00:23:22.197631       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.425921ms"
	I0717 00:23:22.215343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.665283ms"
	I0717 00:23:22.216352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.69µs"
	I0717 00:23:22.323817       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="317.447µs"
	I0717 00:23:23.345654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.841373ms"
	I0717 00:23:23.345858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.507µs"
	I0717 00:23:23.372371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.161µs"
	I0717 00:23:23.372774       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.276µs"
	I0717 00:23:23.391087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.196µs"
	I0717 00:23:23.402857       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.302µs"
	I0717 00:23:23.406873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.996µs"
	I0717 00:23:23.424098       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.913µs"
	I0717 00:23:24.299215       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.816356ms"
	I0717 00:23:24.299638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.466µs"
	I0717 00:23:25.347987       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.734712ms"
	I0717 00:23:25.348235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.862µs"
	I0717 00:23:58.654038       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-565881-m04\" does not exist"
	I0717 00:23:58.795421       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-565881-m04" podCIDRs=["10.244.3.0/24"]
	I0717 00:24:01.920061       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565881-m04"
	I0717 00:24:17.499789       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565881-m04"
	I0717 00:25:13.761655       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565881-m04"
	I0717 00:25:13.977677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.715247ms"
	I0717 00:25:13.979128       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="435.771µs"
	
	
	==> kube-proxy [e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f] <==
	I0717 00:20:53.864763       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:20:53.887642       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.238"]
	I0717 00:20:53.970646       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:20:53.970727       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:20:53.970745       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:20:53.973519       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:20:53.973945       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:20:53.973980       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:20:53.975966       1 config.go:192] "Starting service config controller"
	I0717 00:20:53.977488       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:20:53.977564       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:20:53.977586       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:20:53.978775       1 config.go:319] "Starting node config controller"
	I0717 00:20:53.978827       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:20:54.078104       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:20:54.079286       1 shared_informer.go:320] Caches are synced for node config
	I0717 00:20:54.081344       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6] <==
	W0717 00:20:37.874563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:20:37.874615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:20:38.003436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:20:38.003486       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:20:38.057021       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:20:38.057073       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 00:20:40.978638       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 00:22:55.359047       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bmbng\": pod kube-proxy-bmbng is already assigned to node \"ha-565881-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bmbng" node="ha-565881-m03"
	E0717 00:22:55.359248       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3e8023d6-ad43-4db7-a250-b93a258d64d4(kube-system/kube-proxy-bmbng) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-bmbng"
	E0717 00:22:55.359272       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bmbng\": pod kube-proxy-bmbng is already assigned to node \"ha-565881-m03\"" pod="kube-system/kube-proxy-bmbng"
	I0717 00:22:55.359312       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-bmbng" node="ha-565881-m03"
	E0717 00:23:21.624179       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-n7vc5\": pod busybox-fc5497c4f-n7vc5 is already assigned to node \"ha-565881-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-n7vc5" node="ha-565881-m02"
	E0717 00:23:21.624350       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5b78d075-375f-4f69-8471-5d953de0d009(default/busybox-fc5497c4f-n7vc5) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-n7vc5"
	E0717 00:23:21.624402       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-n7vc5\": pod busybox-fc5497c4f-n7vc5 is already assigned to node \"ha-565881-m02\"" pod="default/busybox-fc5497c4f-n7vc5"
	I0717 00:23:21.624441       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-n7vc5" node="ha-565881-m02"
	E0717 00:23:58.823462       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-xz7nj\": pod kindnet-xz7nj is already assigned to node \"ha-565881-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-xz7nj" node="ha-565881-m04"
	E0717 00:23:58.823582       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-xz7nj\": pod kindnet-xz7nj is already assigned to node \"ha-565881-m04\"" pod="kube-system/kindnet-xz7nj"
	E0717 00:23:58.897275       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-njsjv\": pod kube-proxy-njsjv is already assigned to node \"ha-565881-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-njsjv" node="ha-565881-m04"
	E0717 00:23:58.897422       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0521a710-eba8-4a60-89ab-3d97d26fa540(kube-system/kube-proxy-njsjv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-njsjv"
	E0717 00:23:58.897446       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-njsjv\": pod kube-proxy-njsjv is already assigned to node \"ha-565881-m04\"" pod="kube-system/kube-proxy-njsjv"
	I0717 00:23:58.897468       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-njsjv" node="ha-565881-m04"
	E0717 00:23:58.899913       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-r6sqd\": pod kindnet-r6sqd is already assigned to node \"ha-565881-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-r6sqd" node="ha-565881-m04"
	E0717 00:23:58.900001       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c458e5d6-fe79-40d8-bdea-1bd3aade37d2(kube-system/kindnet-r6sqd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-r6sqd"
	E0717 00:23:58.900023       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-r6sqd\": pod kindnet-r6sqd is already assigned to node \"ha-565881-m04\"" pod="kube-system/kindnet-r6sqd"
	I0717 00:23:58.900048       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-r6sqd" node="ha-565881-m04"
	
	
	==> kubelet <==
	Jul 17 00:22:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:22:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:23:21 ha-565881 kubelet[1370]: I0717 00:23:21.625911    1370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xftzx" podStartSLOduration=148.625777817 podStartE2EDuration="2m28.625777817s" podCreationTimestamp="2024-07-17 00:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 00:21:08.787794401 +0000 UTC m=+29.397467026" watchObservedRunningTime="2024-07-17 00:23:21.625777817 +0000 UTC m=+162.235450440"
	Jul 17 00:23:21 ha-565881 kubelet[1370]: I0717 00:23:21.627774    1370 topology_manager.go:215] "Topology Admit Handler" podUID="7a532a93-0ab1-4911-b7f5-9d85eda2be75" podNamespace="default" podName="busybox-fc5497c4f-sxdsp"
	Jul 17 00:23:21 ha-565881 kubelet[1370]: I0717 00:23:21.746641    1370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzzzd\" (UniqueName: \"kubernetes.io/projected/7a532a93-0ab1-4911-b7f5-9d85eda2be75-kube-api-access-gzzzd\") pod \"busybox-fc5497c4f-sxdsp\" (UID: \"7a532a93-0ab1-4911-b7f5-9d85eda2be75\") " pod="default/busybox-fc5497c4f-sxdsp"
	Jul 17 00:23:39 ha-565881 kubelet[1370]: E0717 00:23:39.574530    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:23:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:23:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:23:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:23:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:24:39 ha-565881 kubelet[1370]: E0717 00:24:39.572945    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:24:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:24:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:24:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:24:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:25:39 ha-565881 kubelet[1370]: E0717 00:25:39.587771    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:25:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:25:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:25:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:25:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:26:39 ha-565881 kubelet[1370]: E0717 00:26:39.582816    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:26:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:26:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:26:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:26:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565881 -n ha-565881
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565881 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (58.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr: exit status 3 (3.191871001s)

                                                
                                                
-- stdout --
	ha-565881
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-565881-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:27:03.130183   35626 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:27:03.130395   35626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:03.130404   35626 out.go:304] Setting ErrFile to fd 2...
	I0717 00:27:03.130409   35626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:03.130580   35626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:27:03.130748   35626 out.go:298] Setting JSON to false
	I0717 00:27:03.130773   35626 mustload.go:65] Loading cluster: ha-565881
	I0717 00:27:03.130830   35626 notify.go:220] Checking for updates...
	I0717 00:27:03.131318   35626 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:27:03.131340   35626 status.go:255] checking status of ha-565881 ...
	I0717 00:27:03.131756   35626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:03.131821   35626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:03.151483   35626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46841
	I0717 00:27:03.151920   35626 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:03.152517   35626 main.go:141] libmachine: Using API Version  1
	I0717 00:27:03.152546   35626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:03.152909   35626 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:03.153134   35626 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:27:03.154640   35626 status.go:330] ha-565881 host status = "Running" (err=<nil>)
	I0717 00:27:03.154653   35626 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:03.154929   35626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:03.154963   35626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:03.170002   35626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0717 00:27:03.170401   35626 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:03.170890   35626 main.go:141] libmachine: Using API Version  1
	I0717 00:27:03.170926   35626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:03.171206   35626 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:03.171362   35626 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:27:03.174360   35626 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:03.174789   35626 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:03.174825   35626 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:03.174976   35626 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:03.175302   35626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:03.175348   35626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:03.190536   35626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0717 00:27:03.190953   35626 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:03.191449   35626 main.go:141] libmachine: Using API Version  1
	I0717 00:27:03.191477   35626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:03.191746   35626 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:03.191910   35626 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:27:03.192101   35626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:03.192138   35626 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:27:03.194761   35626 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:03.195141   35626 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:03.195167   35626 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:03.195285   35626 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:27:03.195437   35626 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:27:03.195585   35626 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:27:03.195683   35626 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:27:03.280776   35626 ssh_runner.go:195] Run: systemctl --version
	I0717 00:27:03.287040   35626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:03.302006   35626 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:03.302039   35626 api_server.go:166] Checking apiserver status ...
	I0717 00:27:03.302081   35626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:03.316385   35626 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0717 00:27:03.326090   35626 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:03.326148   35626 ssh_runner.go:195] Run: ls
	I0717 00:27:03.330289   35626 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:03.337031   35626 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:03.337051   35626 status.go:422] ha-565881 apiserver status = Running (err=<nil>)
	I0717 00:27:03.337063   35626 status.go:257] ha-565881 status: &{Name:ha-565881 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:03.337082   35626 status.go:255] checking status of ha-565881-m02 ...
	I0717 00:27:03.337420   35626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:03.337461   35626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:03.352789   35626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0717 00:27:03.353266   35626 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:03.353745   35626 main.go:141] libmachine: Using API Version  1
	I0717 00:27:03.353771   35626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:03.354113   35626 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:03.354273   35626 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:27:03.355792   35626 status.go:330] ha-565881-m02 host status = "Running" (err=<nil>)
	I0717 00:27:03.355810   35626 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:27:03.356203   35626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:03.356244   35626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:03.370918   35626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
	I0717 00:27:03.371291   35626 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:03.371707   35626 main.go:141] libmachine: Using API Version  1
	I0717 00:27:03.371728   35626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:03.371992   35626 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:03.372161   35626 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:27:03.374716   35626 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:03.375181   35626 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:27:03.375209   35626 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:03.375330   35626 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:27:03.375636   35626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:03.375678   35626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:03.389788   35626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37247
	I0717 00:27:03.390149   35626 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:03.390544   35626 main.go:141] libmachine: Using API Version  1
	I0717 00:27:03.390562   35626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:03.390823   35626 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:03.391011   35626 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:27:03.391153   35626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:03.391175   35626 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:27:03.393890   35626 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:03.394320   35626 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:27:03.394352   35626 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:03.394510   35626 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:27:03.394685   35626 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:27:03.394820   35626 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:27:03.394954   35626 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	W0717 00:27:05.936836   35626 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.14:22: connect: no route to host
	W0717 00:27:05.936935   35626 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	E0717 00:27:05.936948   35626 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:27:05.936961   35626 status.go:257] ha-565881-m02 status: &{Name:ha-565881-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:27:05.936977   35626 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:27:05.936987   35626 status.go:255] checking status of ha-565881-m03 ...
	I0717 00:27:05.937274   35626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:05.937311   35626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:05.952295   35626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46389
	I0717 00:27:05.952793   35626 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:05.953248   35626 main.go:141] libmachine: Using API Version  1
	I0717 00:27:05.953269   35626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:05.953569   35626 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:05.953761   35626 main.go:141] libmachine: (ha-565881-m03) Calling .GetState
	I0717 00:27:05.955358   35626 status.go:330] ha-565881-m03 host status = "Running" (err=<nil>)
	I0717 00:27:05.955372   35626 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:05.955699   35626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:05.955742   35626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:05.970505   35626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I0717 00:27:05.970903   35626 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:05.971372   35626 main.go:141] libmachine: Using API Version  1
	I0717 00:27:05.971389   35626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:05.971684   35626 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:05.971914   35626 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:27:05.974843   35626 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:05.975245   35626 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:05.975263   35626 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:05.975480   35626 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:05.975777   35626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:05.975835   35626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:05.991156   35626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38645
	I0717 00:27:05.991636   35626 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:05.992076   35626 main.go:141] libmachine: Using API Version  1
	I0717 00:27:05.992101   35626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:05.992369   35626 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:05.992551   35626 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:27:05.992752   35626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:05.992770   35626 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:27:05.995355   35626 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:05.995797   35626 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:05.995825   35626 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:05.995964   35626 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:27:05.996121   35626 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:27:05.996263   35626 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:27:05.996371   35626 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:27:06.080208   35626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:06.096273   35626 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:06.096300   35626 api_server.go:166] Checking apiserver status ...
	I0717 00:27:06.096338   35626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:06.111012   35626 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup
	W0717 00:27:06.120669   35626 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:06.120721   35626 ssh_runner.go:195] Run: ls
	I0717 00:27:06.125129   35626 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:06.129473   35626 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:06.129492   35626 status.go:422] ha-565881-m03 apiserver status = Running (err=<nil>)
	I0717 00:27:06.129500   35626 status.go:257] ha-565881-m03 status: &{Name:ha-565881-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:06.129524   35626 status.go:255] checking status of ha-565881-m04 ...
	I0717 00:27:06.129799   35626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:06.129829   35626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:06.144617   35626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0717 00:27:06.144971   35626 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:06.145396   35626 main.go:141] libmachine: Using API Version  1
	I0717 00:27:06.145418   35626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:06.145736   35626 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:06.145890   35626 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:27:06.147482   35626 status.go:330] ha-565881-m04 host status = "Running" (err=<nil>)
	I0717 00:27:06.147495   35626 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:06.147753   35626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:06.147787   35626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:06.162377   35626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35179
	I0717 00:27:06.162770   35626 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:06.163242   35626 main.go:141] libmachine: Using API Version  1
	I0717 00:27:06.163259   35626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:06.163603   35626 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:06.163751   35626 main.go:141] libmachine: (ha-565881-m04) Calling .GetIP
	I0717 00:27:06.166433   35626 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:06.166847   35626 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:06.166865   35626 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:06.167003   35626 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:06.167287   35626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:06.167320   35626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:06.183393   35626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0717 00:27:06.183754   35626 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:06.184161   35626 main.go:141] libmachine: Using API Version  1
	I0717 00:27:06.184186   35626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:06.184470   35626 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:06.184606   35626 main.go:141] libmachine: (ha-565881-m04) Calling .DriverName
	I0717 00:27:06.184767   35626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:06.184784   35626 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	I0717 00:27:06.187461   35626 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:06.187874   35626 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:06.187901   35626 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:06.188066   35626 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHPort
	I0717 00:27:06.188192   35626 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHKeyPath
	I0717 00:27:06.188390   35626 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHUsername
	I0717 00:27:06.188512   35626 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m04/id_rsa Username:docker}
	I0717 00:27:06.268089   35626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:06.281344   35626 status.go:257] ha-565881-m04 status: &{Name:ha-565881-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
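The stderr trace above shows the probes that "minikube status" runs against each node: dial the node's SSH port, run df -h /var | awk 'NR==2{print $5}' for disk usage, check whether the kubelet unit is active, and query the shared apiserver endpoint at https://192.168.39.254:8443/healthz. ha-565881-m02 fails at the very first step with "dial tcp 192.168.39.14:22: connect: no route to host", which is why it is reported as host: Error / kubelet: Nonexistent while the other nodes come back Running. The Go sketch below reproduces only the reachability side of those probes using the standard library; the addresses are taken from the trace, and the logic is a simplified illustration, not minikube's actual status implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"time"
)

// probeNode approximates two of the checks seen in the trace: can the node's
// SSH port be reached at all, and does the load-balanced apiserver endpoint
// answer /healthz. Addresses are the ones logged above.
func probeNode(name, sshAddr, healthzURL string) {
	// ha-565881-m02 fails here with "connect: no route to host".
	conn, err := net.DialTimeout("tcp", sshAddr, 3*time.Second)
	if err != nil {
		fmt.Printf("%s: host Error (%v)\n", name, err)
		return
	}
	conn.Close()

	// The cluster endpoint uses a self-signed certificate, so verification
	// is skipped for this illustration only.
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(healthzURL)
	if err != nil {
		fmt.Printf("%s: ssh ok, apiserver unreachable (%v)\n", name, err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("%s: ssh ok, healthz returned %d\n", name, resp.StatusCode)
}

func main() {
	healthz := "https://192.168.39.254:8443/healthz"
	probeNode("ha-565881-m02", "192.168.39.14:22", healthz)
	probeNode("ha-565881-m03", "192.168.39.97:22", healthz)
	probeNode("ha-565881-m04", "192.168.39.79:22", healthz)
}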
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr: exit status 3 (5.498322852s)

                                                
                                                
-- stdout --
	ha-565881
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-565881-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:27:06.983513   35727 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:27:06.983723   35727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:06.983731   35727 out.go:304] Setting ErrFile to fd 2...
	I0717 00:27:06.983735   35727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:06.983893   35727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:27:06.984040   35727 out.go:298] Setting JSON to false
	I0717 00:27:06.984065   35727 mustload.go:65] Loading cluster: ha-565881
	I0717 00:27:06.984150   35727 notify.go:220] Checking for updates...
	I0717 00:27:06.984426   35727 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:27:06.984438   35727 status.go:255] checking status of ha-565881 ...
	I0717 00:27:06.984820   35727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:06.984880   35727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:07.003436   35727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45519
	I0717 00:27:07.003921   35727 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:07.004600   35727 main.go:141] libmachine: Using API Version  1
	I0717 00:27:07.004633   35727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:07.004980   35727 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:07.005186   35727 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:27:07.006786   35727 status.go:330] ha-565881 host status = "Running" (err=<nil>)
	I0717 00:27:07.006801   35727 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:07.007093   35727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:07.007131   35727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:07.024070   35727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41077
	I0717 00:27:07.024470   35727 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:07.025026   35727 main.go:141] libmachine: Using API Version  1
	I0717 00:27:07.025044   35727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:07.025359   35727 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:07.025630   35727 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:27:07.028510   35727 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:07.029046   35727 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:07.029077   35727 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:07.029150   35727 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:07.029435   35727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:07.029474   35727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:07.045040   35727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I0717 00:27:07.045409   35727 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:07.045828   35727 main.go:141] libmachine: Using API Version  1
	I0717 00:27:07.045853   35727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:07.046230   35727 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:07.046429   35727 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:27:07.046618   35727 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:07.046641   35727 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:27:07.049804   35727 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:07.050225   35727 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:07.050263   35727 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:07.050355   35727 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:27:07.050540   35727 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:27:07.050684   35727 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:27:07.050833   35727 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:27:07.137382   35727 ssh_runner.go:195] Run: systemctl --version
	I0717 00:27:07.143898   35727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:07.158880   35727 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:07.158912   35727 api_server.go:166] Checking apiserver status ...
	I0717 00:27:07.158949   35727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:07.176703   35727 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0717 00:27:07.186073   35727 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:07.186160   35727 ssh_runner.go:195] Run: ls
	I0717 00:27:07.190739   35727 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:07.195243   35727 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:07.195268   35727 status.go:422] ha-565881 apiserver status = Running (err=<nil>)
	I0717 00:27:07.195276   35727 status.go:257] ha-565881 status: &{Name:ha-565881 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:07.195290   35727 status.go:255] checking status of ha-565881-m02 ...
	I0717 00:27:07.195600   35727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:07.195630   35727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:07.211382   35727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43767
	I0717 00:27:07.211803   35727 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:07.212349   35727 main.go:141] libmachine: Using API Version  1
	I0717 00:27:07.212375   35727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:07.212752   35727 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:07.212960   35727 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:27:07.214654   35727 status.go:330] ha-565881-m02 host status = "Running" (err=<nil>)
	I0717 00:27:07.214670   35727 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:27:07.214968   35727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:07.215009   35727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:07.229626   35727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34131
	I0717 00:27:07.229968   35727 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:07.230404   35727 main.go:141] libmachine: Using API Version  1
	I0717 00:27:07.230426   35727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:07.230707   35727 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:07.230886   35727 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:27:07.233540   35727 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:07.233967   35727 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:27:07.234005   35727 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:07.234154   35727 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:27:07.234447   35727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:07.234509   35727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:07.248903   35727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35639
	I0717 00:27:07.249329   35727 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:07.249805   35727 main.go:141] libmachine: Using API Version  1
	I0717 00:27:07.249827   35727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:07.250175   35727 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:07.250349   35727 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:27:07.250540   35727 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:07.250560   35727 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:27:07.253266   35727 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:07.253717   35727 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:27:07.253742   35727 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:07.253873   35727 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:27:07.254053   35727 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:27:07.254180   35727 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:27:07.254320   35727 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	W0717 00:27:09.008921   35727 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:27:09.008973   35727 retry.go:31] will retry after 152.94455ms: dial tcp 192.168.39.14:22: connect: no route to host
	W0717 00:27:12.084807   35727 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.14:22: connect: no route to host
	W0717 00:27:12.084882   35727 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	E0717 00:27:12.084899   35727 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:27:12.084909   35727 status.go:257] ha-565881-m02 status: &{Name:ha-565881-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:27:12.084925   35727 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:27:12.084932   35727 status.go:255] checking status of ha-565881-m03 ...
	I0717 00:27:12.085333   35727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:12.085379   35727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:12.100809   35727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43987
	I0717 00:27:12.101307   35727 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:12.101866   35727 main.go:141] libmachine: Using API Version  1
	I0717 00:27:12.101898   35727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:12.102236   35727 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:12.102435   35727 main.go:141] libmachine: (ha-565881-m03) Calling .GetState
	I0717 00:27:12.104083   35727 status.go:330] ha-565881-m03 host status = "Running" (err=<nil>)
	I0717 00:27:12.104103   35727 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:12.104509   35727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:12.104592   35727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:12.119707   35727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0717 00:27:12.120125   35727 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:12.120545   35727 main.go:141] libmachine: Using API Version  1
	I0717 00:27:12.120593   35727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:12.120934   35727 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:12.121127   35727 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:27:12.123954   35727 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:12.124400   35727 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:12.124425   35727 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:12.124532   35727 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:12.124841   35727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:12.124884   35727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:12.141074   35727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38327
	I0717 00:27:12.141639   35727 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:12.142126   35727 main.go:141] libmachine: Using API Version  1
	I0717 00:27:12.142145   35727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:12.142422   35727 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:12.142604   35727 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:27:12.142805   35727 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:12.142832   35727 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:27:12.145416   35727 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:12.145797   35727 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:12.145824   35727 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:12.145950   35727 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:27:12.146099   35727 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:27:12.146281   35727 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:27:12.146426   35727 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:27:12.232705   35727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:12.249512   35727 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:12.249544   35727 api_server.go:166] Checking apiserver status ...
	I0717 00:27:12.249583   35727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:12.265081   35727 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup
	W0717 00:27:12.276117   35727 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:12.276176   35727 ssh_runner.go:195] Run: ls
	I0717 00:27:12.281125   35727 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:12.285338   35727 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:12.285358   35727 status.go:422] ha-565881-m03 apiserver status = Running (err=<nil>)
	I0717 00:27:12.285366   35727 status.go:257] ha-565881-m03 status: &{Name:ha-565881-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:12.285383   35727 status.go:255] checking status of ha-565881-m04 ...
	I0717 00:27:12.285686   35727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:12.285722   35727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:12.301404   35727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35541
	I0717 00:27:12.301814   35727 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:12.302378   35727 main.go:141] libmachine: Using API Version  1
	I0717 00:27:12.302408   35727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:12.302743   35727 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:12.302996   35727 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:27:12.304417   35727 status.go:330] ha-565881-m04 host status = "Running" (err=<nil>)
	I0717 00:27:12.304430   35727 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:12.304736   35727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:12.304773   35727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:12.319615   35727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43745
	I0717 00:27:12.319994   35727 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:12.320447   35727 main.go:141] libmachine: Using API Version  1
	I0717 00:27:12.320469   35727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:12.320761   35727 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:12.320931   35727 main.go:141] libmachine: (ha-565881-m04) Calling .GetIP
	I0717 00:27:12.323368   35727 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:12.323765   35727 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:12.323792   35727 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:12.323928   35727 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:12.324270   35727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:12.324307   35727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:12.338790   35727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43561
	I0717 00:27:12.339185   35727 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:12.339666   35727 main.go:141] libmachine: Using API Version  1
	I0717 00:27:12.339690   35727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:12.340032   35727 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:12.340227   35727 main.go:141] libmachine: (ha-565881-m04) Calling .DriverName
	I0717 00:27:12.340456   35727 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:12.340475   35727 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	I0717 00:27:12.343174   35727 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:12.343643   35727 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:12.343669   35727 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:12.343786   35727 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHPort
	I0717 00:27:12.343965   35727 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHKeyPath
	I0717 00:27:12.344141   35727 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHUsername
	I0717 00:27:12.344266   35727 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m04/id_rsa Username:docker}
	I0717 00:27:12.424305   35727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:12.439537   35727 status.go:257] ha-565881-m04 status: &{Name:ha-565881-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
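Each reachable control-plane node's trace also logs "unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup: Process exited with status 1" before the healthz check succeeds, most likely because the host uses cgroup v2, where freezer no longer appears as a named controller in /proc/<pid>/cgroup, so the grep finds nothing and exits 1; the status command still confirms the apiserver via /healthz, as the 200 responses above show. The sketch below reproduces only that freezer-line check; the PIDs come from the pgrep output in the traces, and cgroup v2 being in use is an assumption, not something the log states.

package main

import (
	"fmt"
	"os"
	"strings"
)

// hasFreezerCgroup mirrors the "egrep ^[0-9]+:freezer: /proc/<pid>/cgroup"
// probe from the trace. On cgroup v1 the apiserver's cgroup file contains a
// freezer line; on cgroup v2 it does not, which matches the warning above.
func hasFreezerCgroup(pid int) (bool, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.Contains(line, ":freezer:") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// 1136 and 1515 are the kube-apiserver PIDs reported by pgrep on
	// ha-565881 and ha-565881-m03 in the traces above.
	for _, pid := range []int{1136, 1515} {
		found, err := hasFreezerCgroup(pid)
		fmt.Printf("pid %d: freezer cgroup present=%v err=%v\n", pid, found, err)
	}
}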
E0717 00:27:12.451293   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr: exit status 3 (4.792730441s)

                                                
                                                
-- stdout --
	ha-565881
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-565881-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:27:14.157788   35844 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:27:14.157904   35844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:14.157914   35844 out.go:304] Setting ErrFile to fd 2...
	I0717 00:27:14.157920   35844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:14.158112   35844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:27:14.158294   35844 out.go:298] Setting JSON to false
	I0717 00:27:14.158324   35844 mustload.go:65] Loading cluster: ha-565881
	I0717 00:27:14.158398   35844 notify.go:220] Checking for updates...
	I0717 00:27:14.158730   35844 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:27:14.158750   35844 status.go:255] checking status of ha-565881 ...
	I0717 00:27:14.159158   35844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:14.159232   35844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:14.181860   35844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34253
	I0717 00:27:14.182261   35844 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:14.182877   35844 main.go:141] libmachine: Using API Version  1
	I0717 00:27:14.182901   35844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:14.183267   35844 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:14.183502   35844 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:27:14.185371   35844 status.go:330] ha-565881 host status = "Running" (err=<nil>)
	I0717 00:27:14.185389   35844 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:14.185696   35844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:14.185733   35844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:14.199976   35844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0717 00:27:14.200512   35844 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:14.201035   35844 main.go:141] libmachine: Using API Version  1
	I0717 00:27:14.201059   35844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:14.201351   35844 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:14.201508   35844 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:27:14.203824   35844 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:14.204203   35844 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:14.204228   35844 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:14.204380   35844 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:14.204686   35844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:14.204725   35844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:14.220236   35844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37567
	I0717 00:27:14.220628   35844 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:14.221002   35844 main.go:141] libmachine: Using API Version  1
	I0717 00:27:14.221026   35844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:14.221317   35844 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:14.221492   35844 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:27:14.221706   35844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:14.221740   35844 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:27:14.224390   35844 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:14.224834   35844 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:14.224865   35844 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:14.224944   35844 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:27:14.225083   35844 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:27:14.225220   35844 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:27:14.225324   35844 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:27:14.308432   35844 ssh_runner.go:195] Run: systemctl --version
	I0717 00:27:14.315324   35844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:14.330373   35844 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:14.330398   35844 api_server.go:166] Checking apiserver status ...
	I0717 00:27:14.330430   35844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:14.345693   35844 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0717 00:27:14.354648   35844 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:14.354708   35844 ssh_runner.go:195] Run: ls
	I0717 00:27:14.359045   35844 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:14.363379   35844 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:14.363397   35844 status.go:422] ha-565881 apiserver status = Running (err=<nil>)
	I0717 00:27:14.363407   35844 status.go:257] ha-565881 status: &{Name:ha-565881 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:14.363422   35844 status.go:255] checking status of ha-565881-m02 ...
	I0717 00:27:14.363710   35844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:14.363749   35844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:14.378873   35844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36785
	I0717 00:27:14.379315   35844 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:14.380109   35844 main.go:141] libmachine: Using API Version  1
	I0717 00:27:14.380139   35844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:14.380494   35844 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:14.380683   35844 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:27:14.382129   35844 status.go:330] ha-565881-m02 host status = "Running" (err=<nil>)
	I0717 00:27:14.382145   35844 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:27:14.382427   35844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:14.382458   35844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:14.398059   35844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36491
	I0717 00:27:14.398438   35844 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:14.398855   35844 main.go:141] libmachine: Using API Version  1
	I0717 00:27:14.398872   35844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:14.399299   35844 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:14.399501   35844 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:27:14.402428   35844 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:14.402850   35844 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:27:14.402879   35844 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:14.403024   35844 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:27:14.403316   35844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:14.403353   35844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:14.418089   35844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0717 00:27:14.418442   35844 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:14.418917   35844 main.go:141] libmachine: Using API Version  1
	I0717 00:27:14.418939   35844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:14.419236   35844 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:14.419418   35844 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:27:14.419594   35844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:14.419614   35844 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:27:14.422282   35844 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:14.422664   35844 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:27:14.422687   35844 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:14.422847   35844 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:27:14.423015   35844 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:27:14.423181   35844 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:27:14.423299   35844 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	W0717 00:27:15.152788   35844 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:27:15.152846   35844 retry.go:31] will retry after 325.789943ms: dial tcp 192.168.39.14:22: connect: no route to host
	W0717 00:27:18.548832   35844 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.14:22: connect: no route to host
	W0717 00:27:18.548955   35844 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	E0717 00:27:18.548990   35844 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:27:18.549000   35844 status.go:257] ha-565881-m02 status: &{Name:ha-565881-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:27:18.549022   35844 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:27:18.549032   35844 status.go:255] checking status of ha-565881-m03 ...
	I0717 00:27:18.549550   35844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:18.549612   35844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:18.564769   35844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0717 00:27:18.565219   35844 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:18.565683   35844 main.go:141] libmachine: Using API Version  1
	I0717 00:27:18.565703   35844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:18.566027   35844 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:18.566218   35844 main.go:141] libmachine: (ha-565881-m03) Calling .GetState
	I0717 00:27:18.567772   35844 status.go:330] ha-565881-m03 host status = "Running" (err=<nil>)
	I0717 00:27:18.567788   35844 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:18.568066   35844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:18.568099   35844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:18.582975   35844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36575
	I0717 00:27:18.583347   35844 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:18.583727   35844 main.go:141] libmachine: Using API Version  1
	I0717 00:27:18.583745   35844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:18.584036   35844 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:18.584207   35844 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:27:18.586851   35844 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:18.587239   35844 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:18.587271   35844 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:18.587388   35844 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:18.587759   35844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:18.587806   35844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:18.602712   35844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33389
	I0717 00:27:18.603212   35844 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:18.603738   35844 main.go:141] libmachine: Using API Version  1
	I0717 00:27:18.603763   35844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:18.604062   35844 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:18.604226   35844 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:27:18.604433   35844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:18.604453   35844 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:27:18.607414   35844 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:18.607895   35844 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:18.607927   35844 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:18.608062   35844 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:27:18.608249   35844 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:27:18.608413   35844 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:27:18.608572   35844 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:27:18.692950   35844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:18.708402   35844 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:18.708433   35844 api_server.go:166] Checking apiserver status ...
	I0717 00:27:18.708473   35844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:18.728742   35844 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup
	W0717 00:27:18.740632   35844 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:18.740678   35844 ssh_runner.go:195] Run: ls
	I0717 00:27:18.745802   35844 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:18.750160   35844 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:18.750181   35844 status.go:422] ha-565881-m03 apiserver status = Running (err=<nil>)
	I0717 00:27:18.750188   35844 status.go:257] ha-565881-m03 status: &{Name:ha-565881-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:18.750209   35844 status.go:255] checking status of ha-565881-m04 ...
	I0717 00:27:18.750500   35844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:18.750529   35844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:18.765281   35844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39663
	I0717 00:27:18.765730   35844 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:18.766157   35844 main.go:141] libmachine: Using API Version  1
	I0717 00:27:18.766179   35844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:18.766495   35844 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:18.766659   35844 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:27:18.768016   35844 status.go:330] ha-565881-m04 host status = "Running" (err=<nil>)
	I0717 00:27:18.768030   35844 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:18.768287   35844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:18.768333   35844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:18.782301   35844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
	I0717 00:27:18.782625   35844 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:18.783106   35844 main.go:141] libmachine: Using API Version  1
	I0717 00:27:18.783127   35844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:18.783389   35844 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:18.783560   35844 main.go:141] libmachine: (ha-565881-m04) Calling .GetIP
	I0717 00:27:18.786296   35844 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:18.786761   35844 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:18.786796   35844 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:18.786901   35844 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:18.787188   35844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:18.787222   35844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:18.801220   35844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0717 00:27:18.801527   35844 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:18.801960   35844 main.go:141] libmachine: Using API Version  1
	I0717 00:27:18.801993   35844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:18.802301   35844 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:18.802561   35844 main.go:141] libmachine: (ha-565881-m04) Calling .DriverName
	I0717 00:27:18.802753   35844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:18.802774   35844 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	I0717 00:27:18.805182   35844 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:18.805589   35844 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:18.805612   35844 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:18.805786   35844 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHPort
	I0717 00:27:18.805924   35844 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHKeyPath
	I0717 00:27:18.806063   35844 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHUsername
	I0717 00:27:18.806174   35844 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m04/id_rsa Username:docker}
	I0717 00:27:18.891613   35844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:18.907748   35844 status.go:257] ha-565881-m04 status: &{Name:ha-565881-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
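The status run above decides that a control-plane node's apiserver is "Running" by hitting the load-balanced endpoint https://192.168.39.254:8443/healthz and treating a 200 response as healthy (api_server.go:253/279 in the log). A minimal standalone sketch of such a probe is shown below; the hard-coded endpoint and the choice to skip TLS verification are assumptions for illustration only, not minikube's actual client setup.

// Hedged sketch: a /healthz probe similar in spirit to the check logged above.
// Endpoint and InsecureSkipVerify are illustrative assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The test cluster's apiserver serves a self-signed certificate,
			// so verification is skipped here (assumption for the sketch).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode)
}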
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr: exit status 3 (4.221227039s)

                                                
                                                
-- stdout --
	ha-565881
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-565881-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:27:21.154265   35943 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:27:21.154488   35943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:21.154496   35943 out.go:304] Setting ErrFile to fd 2...
	I0717 00:27:21.154500   35943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:21.154672   35943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:27:21.154819   35943 out.go:298] Setting JSON to false
	I0717 00:27:21.154850   35943 mustload.go:65] Loading cluster: ha-565881
	I0717 00:27:21.154890   35943 notify.go:220] Checking for updates...
	I0717 00:27:21.155216   35943 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:27:21.155229   35943 status.go:255] checking status of ha-565881 ...
	I0717 00:27:21.155573   35943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:21.155629   35943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:21.174629   35943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44981
	I0717 00:27:21.175101   35943 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:21.175773   35943 main.go:141] libmachine: Using API Version  1
	I0717 00:27:21.175810   35943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:21.176135   35943 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:21.176289   35943 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:27:21.178100   35943 status.go:330] ha-565881 host status = "Running" (err=<nil>)
	I0717 00:27:21.178119   35943 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:21.178421   35943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:21.178455   35943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:21.193863   35943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I0717 00:27:21.194237   35943 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:21.194705   35943 main.go:141] libmachine: Using API Version  1
	I0717 00:27:21.194739   35943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:21.195064   35943 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:21.195273   35943 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:27:21.197780   35943 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:21.198227   35943 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:21.198249   35943 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:21.198387   35943 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:21.198787   35943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:21.198833   35943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:21.214045   35943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0717 00:27:21.214487   35943 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:21.215005   35943 main.go:141] libmachine: Using API Version  1
	I0717 00:27:21.215023   35943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:21.215383   35943 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:21.215584   35943 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:27:21.215785   35943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:21.215819   35943 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:27:21.218787   35943 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:21.219212   35943 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:21.219242   35943 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:21.219378   35943 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:27:21.219524   35943 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:27:21.219656   35943 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:27:21.219785   35943 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:27:21.305170   35943 ssh_runner.go:195] Run: systemctl --version
	I0717 00:27:21.310981   35943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:21.325479   35943 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:21.325520   35943 api_server.go:166] Checking apiserver status ...
	I0717 00:27:21.325563   35943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:21.339579   35943 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0717 00:27:21.349084   35943 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:21.349138   35943 ssh_runner.go:195] Run: ls
	I0717 00:27:21.353509   35943 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:21.360098   35943 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:21.360117   35943 status.go:422] ha-565881 apiserver status = Running (err=<nil>)
	I0717 00:27:21.360126   35943 status.go:257] ha-565881 status: &{Name:ha-565881 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:21.360150   35943 status.go:255] checking status of ha-565881-m02 ...
	I0717 00:27:21.360417   35943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:21.360452   35943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:21.375101   35943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35599
	I0717 00:27:21.375553   35943 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:21.376046   35943 main.go:141] libmachine: Using API Version  1
	I0717 00:27:21.376067   35943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:21.376368   35943 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:21.376541   35943 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:27:21.378162   35943 status.go:330] ha-565881-m02 host status = "Running" (err=<nil>)
	I0717 00:27:21.378179   35943 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:27:21.378561   35943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:21.378603   35943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:21.394875   35943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42069
	I0717 00:27:21.395295   35943 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:21.395781   35943 main.go:141] libmachine: Using API Version  1
	I0717 00:27:21.395810   35943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:21.396113   35943 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:21.396297   35943 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:27:21.399044   35943 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:21.399473   35943 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:27:21.399495   35943 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:21.399612   35943 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:27:21.399924   35943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:21.399965   35943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:21.414836   35943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35837
	I0717 00:27:21.415269   35943 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:21.415692   35943 main.go:141] libmachine: Using API Version  1
	I0717 00:27:21.415713   35943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:21.416062   35943 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:21.416269   35943 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:27:21.416453   35943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:21.416474   35943 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:27:21.419674   35943 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:21.420078   35943 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:27:21.420102   35943 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:21.420321   35943 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:27:21.420511   35943 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:27:21.420684   35943 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:27:21.420922   35943 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	W0717 00:27:21.616792   35943 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:27:21.616847   35943 retry.go:31] will retry after 295.648011ms: dial tcp 192.168.39.14:22: connect: no route to host
	W0717 00:27:24.976782   35943 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.14:22: connect: no route to host
	W0717 00:27:24.976857   35943 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	E0717 00:27:24.976870   35943 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:27:24.976877   35943 status.go:257] ha-565881-m02 status: &{Name:ha-565881-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:27:24.976909   35943 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:27:24.976923   35943 status.go:255] checking status of ha-565881-m03 ...
	I0717 00:27:24.977250   35943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:24.977300   35943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:24.992490   35943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46579
	I0717 00:27:24.992909   35943 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:24.993357   35943 main.go:141] libmachine: Using API Version  1
	I0717 00:27:24.993378   35943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:24.993714   35943 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:24.993884   35943 main.go:141] libmachine: (ha-565881-m03) Calling .GetState
	I0717 00:27:24.995349   35943 status.go:330] ha-565881-m03 host status = "Running" (err=<nil>)
	I0717 00:27:24.995364   35943 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:24.995635   35943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:24.995669   35943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:25.010069   35943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35343
	I0717 00:27:25.010479   35943 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:25.010974   35943 main.go:141] libmachine: Using API Version  1
	I0717 00:27:25.010989   35943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:25.011283   35943 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:25.011480   35943 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:27:25.013991   35943 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:25.014342   35943 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:25.014379   35943 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:25.014500   35943 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:25.014799   35943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:25.014836   35943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:25.029269   35943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41379
	I0717 00:27:25.029582   35943 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:25.030026   35943 main.go:141] libmachine: Using API Version  1
	I0717 00:27:25.030045   35943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:25.030345   35943 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:25.030508   35943 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:27:25.030644   35943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:25.030663   35943 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:27:25.033233   35943 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:25.033664   35943 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:25.033713   35943 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:25.033823   35943 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:27:25.033973   35943 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:27:25.034102   35943 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:27:25.034267   35943 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:27:25.120510   35943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:25.143893   35943 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:25.143931   35943 api_server.go:166] Checking apiserver status ...
	I0717 00:27:25.143975   35943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:25.158897   35943 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup
	W0717 00:27:25.168269   35943 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:25.168315   35943 ssh_runner.go:195] Run: ls
	I0717 00:27:25.174012   35943 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:25.178145   35943 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:25.178166   35943 status.go:422] ha-565881-m03 apiserver status = Running (err=<nil>)
	I0717 00:27:25.178175   35943 status.go:257] ha-565881-m03 status: &{Name:ha-565881-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:25.178189   35943 status.go:255] checking status of ha-565881-m04 ...
	I0717 00:27:25.178472   35943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:25.178509   35943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:25.193738   35943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I0717 00:27:25.194166   35943 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:25.194767   35943 main.go:141] libmachine: Using API Version  1
	I0717 00:27:25.194788   35943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:25.195088   35943 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:25.195251   35943 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:27:25.196663   35943 status.go:330] ha-565881-m04 host status = "Running" (err=<nil>)
	I0717 00:27:25.196680   35943 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:25.197017   35943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:25.197055   35943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:25.212581   35943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40631
	I0717 00:27:25.213013   35943 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:25.213458   35943 main.go:141] libmachine: Using API Version  1
	I0717 00:27:25.213483   35943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:25.213789   35943 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:25.213976   35943 main.go:141] libmachine: (ha-565881-m04) Calling .GetIP
	I0717 00:27:25.217001   35943 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:25.217385   35943 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:25.217412   35943 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:25.217529   35943 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:25.217821   35943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:25.217862   35943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:25.232410   35943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0717 00:27:25.232865   35943 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:25.233288   35943 main.go:141] libmachine: Using API Version  1
	I0717 00:27:25.233305   35943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:25.233594   35943 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:25.233761   35943 main.go:141] libmachine: (ha-565881-m04) Calling .DriverName
	I0717 00:27:25.233953   35943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:25.233969   35943 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	I0717 00:27:25.236266   35943 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:25.236617   35943 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:25.236640   35943 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:25.236774   35943 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHPort
	I0717 00:27:25.236947   35943 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHKeyPath
	I0717 00:27:25.237091   35943 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHUsername
	I0717 00:27:25.237235   35943 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m04/id_rsa Username:docker}
	I0717 00:27:25.320806   35943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:25.333978   35943 status.go:257] ha-565881-m04 status: &{Name:ha-565881-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
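The ha-565881-m02 result above reduces to SSH being unreachable: every dial to 192.168.39.14:22 returns "connect: no route to host", the retries are exhausted, and the node is reported as Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent. A minimal sketch of an equivalent TCP reachability probe with a short retry loop follows; the address, timeout, and retry count are illustrative assumptions rather than the retry policy in sshutil/retry.go.

// Hedged sketch: TCP reachability probe with retries, mirroring the
// "dial tcp 192.168.39.14:22: connect: no route to host" loop in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.14:22" // node address from the log (assumption: still current)
	for attempt := 1; attempt <= 3; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("port reachable on attempt", attempt)
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(300 * time.Millisecond)
	}
	fmt.Println("giving up:", addr, "is unreachable")
}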
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr: exit status 3 (3.702636188s)

                                                
                                                
-- stdout --
	ha-565881
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-565881-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:27:29.915029   36058 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:27:29.915256   36058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:29.915263   36058 out.go:304] Setting ErrFile to fd 2...
	I0717 00:27:29.915267   36058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:29.915431   36058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:27:29.915574   36058 out.go:298] Setting JSON to false
	I0717 00:27:29.915599   36058 mustload.go:65] Loading cluster: ha-565881
	I0717 00:27:29.915741   36058 notify.go:220] Checking for updates...
	I0717 00:27:29.915975   36058 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:27:29.915990   36058 status.go:255] checking status of ha-565881 ...
	I0717 00:27:29.916468   36058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:29.916513   36058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:29.936970   36058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37549
	I0717 00:27:29.937313   36058 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:29.937834   36058 main.go:141] libmachine: Using API Version  1
	I0717 00:27:29.937856   36058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:29.938214   36058 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:29.938450   36058 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:27:29.940113   36058 status.go:330] ha-565881 host status = "Running" (err=<nil>)
	I0717 00:27:29.940130   36058 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:29.940414   36058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:29.940447   36058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:29.954663   36058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43499
	I0717 00:27:29.955017   36058 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:29.955489   36058 main.go:141] libmachine: Using API Version  1
	I0717 00:27:29.955530   36058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:29.955851   36058 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:29.956033   36058 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:27:29.958889   36058 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:29.959333   36058 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:29.959361   36058 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:29.959493   36058 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:29.959794   36058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:29.959824   36058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:29.974038   36058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
	I0717 00:27:29.974528   36058 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:29.975068   36058 main.go:141] libmachine: Using API Version  1
	I0717 00:27:29.975093   36058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:29.975399   36058 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:29.975538   36058 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:27:29.975704   36058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:29.975723   36058 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:27:29.978691   36058 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:29.979131   36058 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:29.979158   36058 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:29.979324   36058 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:27:29.979480   36058 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:27:29.979605   36058 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:27:29.979716   36058 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:27:30.064473   36058 ssh_runner.go:195] Run: systemctl --version
	I0717 00:27:30.070385   36058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:30.086312   36058 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:30.086340   36058 api_server.go:166] Checking apiserver status ...
	I0717 00:27:30.086373   36058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:30.101020   36058 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0717 00:27:30.111638   36058 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:30.111684   36058 ssh_runner.go:195] Run: ls
	I0717 00:27:30.116440   36058 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:30.120649   36058 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:30.120674   36058 status.go:422] ha-565881 apiserver status = Running (err=<nil>)
	I0717 00:27:30.120683   36058 status.go:257] ha-565881 status: &{Name:ha-565881 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:30.120699   36058 status.go:255] checking status of ha-565881-m02 ...
	I0717 00:27:30.121015   36058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:30.121046   36058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:30.136038   36058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46035
	I0717 00:27:30.136388   36058 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:30.136874   36058 main.go:141] libmachine: Using API Version  1
	I0717 00:27:30.136897   36058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:30.137194   36058 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:30.137377   36058 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:27:30.138798   36058 status.go:330] ha-565881-m02 host status = "Running" (err=<nil>)
	I0717 00:27:30.138816   36058 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:27:30.139098   36058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:30.139132   36058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:30.157341   36058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34559
	I0717 00:27:30.157771   36058 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:30.158389   36058 main.go:141] libmachine: Using API Version  1
	I0717 00:27:30.158409   36058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:30.158706   36058 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:30.158866   36058 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:27:30.161584   36058 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:30.161965   36058 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:27:30.161989   36058 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:30.162125   36058 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:27:30.162525   36058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:30.162568   36058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:30.176587   36058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40675
	I0717 00:27:30.176973   36058 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:30.177431   36058 main.go:141] libmachine: Using API Version  1
	I0717 00:27:30.177453   36058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:30.177801   36058 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:30.177960   36058 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:27:30.178125   36058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:30.178144   36058 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:27:30.180492   36058 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:30.180883   36058 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:27:30.180915   36058 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:27:30.181060   36058 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:27:30.181229   36058 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:27:30.181366   36058 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:27:30.181499   36058 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	W0717 00:27:33.232802   36058 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.14:22: connect: no route to host
	W0717 00:27:33.232890   36058 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	E0717 00:27:33.232902   36058 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:27:33.232909   36058 status.go:257] ha-565881-m02 status: &{Name:ha-565881-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:27:33.232924   36058 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:27:33.232931   36058 status.go:255] checking status of ha-565881-m03 ...
	I0717 00:27:33.233236   36058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:33.233287   36058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:33.248190   36058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36995
	I0717 00:27:33.248576   36058 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:33.249044   36058 main.go:141] libmachine: Using API Version  1
	I0717 00:27:33.249063   36058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:33.249435   36058 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:33.249622   36058 main.go:141] libmachine: (ha-565881-m03) Calling .GetState
	I0717 00:27:33.251257   36058 status.go:330] ha-565881-m03 host status = "Running" (err=<nil>)
	I0717 00:27:33.251275   36058 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:33.251637   36058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:33.251705   36058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:33.266008   36058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35875
	I0717 00:27:33.266431   36058 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:33.266893   36058 main.go:141] libmachine: Using API Version  1
	I0717 00:27:33.266918   36058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:33.267212   36058 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:33.267432   36058 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:27:33.270015   36058 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:33.270491   36058 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:33.270510   36058 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:33.270656   36058 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:33.270981   36058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:33.271019   36058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:33.286257   36058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44987
	I0717 00:27:33.286658   36058 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:33.287168   36058 main.go:141] libmachine: Using API Version  1
	I0717 00:27:33.287201   36058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:33.287474   36058 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:33.287649   36058 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:27:33.287808   36058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:33.287834   36058 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:27:33.290554   36058 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:33.291028   36058 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:33.291050   36058 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:33.291210   36058 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:27:33.291374   36058 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:27:33.291540   36058 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:27:33.291653   36058 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:27:33.376121   36058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:33.391151   36058 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:33.391177   36058 api_server.go:166] Checking apiserver status ...
	I0717 00:27:33.391223   36058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:33.404882   36058 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup
	W0717 00:27:33.414225   36058 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:33.414279   36058 ssh_runner.go:195] Run: ls
	I0717 00:27:33.419517   36058 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:33.423994   36058 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:33.424015   36058 status.go:422] ha-565881-m03 apiserver status = Running (err=<nil>)
	I0717 00:27:33.424022   36058 status.go:257] ha-565881-m03 status: &{Name:ha-565881-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:33.424036   36058 status.go:255] checking status of ha-565881-m04 ...
	I0717 00:27:33.424308   36058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:33.424338   36058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:33.439661   36058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0717 00:27:33.440026   36058 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:33.440488   36058 main.go:141] libmachine: Using API Version  1
	I0717 00:27:33.440510   36058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:33.440879   36058 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:33.441077   36058 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:27:33.442630   36058 status.go:330] ha-565881-m04 host status = "Running" (err=<nil>)
	I0717 00:27:33.442643   36058 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:33.442963   36058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:33.443004   36058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:33.457698   36058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0717 00:27:33.458104   36058 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:33.458584   36058 main.go:141] libmachine: Using API Version  1
	I0717 00:27:33.458607   36058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:33.458901   36058 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:33.459103   36058 main.go:141] libmachine: (ha-565881-m04) Calling .GetIP
	I0717 00:27:33.462060   36058 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:33.462543   36058 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:33.462570   36058 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:33.462691   36058 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:33.463059   36058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:33.463091   36058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:33.477007   36058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34113
	I0717 00:27:33.477356   36058 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:33.477728   36058 main.go:141] libmachine: Using API Version  1
	I0717 00:27:33.477744   36058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:33.478015   36058 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:33.478195   36058 main.go:141] libmachine: (ha-565881-m04) Calling .DriverName
	I0717 00:27:33.478366   36058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:33.478383   36058 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	I0717 00:27:33.481007   36058 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:33.481422   36058 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:33.481456   36058 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:33.481585   36058 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHPort
	I0717 00:27:33.481747   36058 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHKeyPath
	I0717 00:27:33.481916   36058 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHUsername
	I0717 00:27:33.482086   36058 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m04/id_rsa Username:docker}
	I0717 00:27:33.563641   36058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:33.577262   36058 status.go:257] ha-565881-m04 status: &{Name:ha-565881-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
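Before a node's host status is reported, the log above shows each node being asked for its /var usage with sh -c "df -h /var | awk 'NR==2{print $5}'" over SSH (start.go/status.go "failed to get storage capacity of /var" when the dial fails). A minimal local sketch of the same check via os/exec is given below; running it locally instead of over SSH is a simplification for illustration.

// Hedged sketch: the storage-capacity check from the log, run locally.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same pipeline as in the log: second line of `df -h /var`, fifth column (Use%).
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		fmt.Println("df check failed:", err)
		return
	}
	fmt.Println("/var usage:", strings.TrimSpace(string(out)))
}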
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr: exit status 7 (627.401972ms)

                                                
                                                
-- stdout --
	ha-565881
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m02
	type: Control Plane
	host: Stopping
	kubelet: Stopping
	apiserver: Stopping
	kubeconfig: Stopping
	
	ha-565881-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:27:39.233538   36175 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:27:39.233749   36175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:39.233757   36175 out.go:304] Setting ErrFile to fd 2...
	I0717 00:27:39.233761   36175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:39.233920   36175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:27:39.234100   36175 out.go:298] Setting JSON to false
	I0717 00:27:39.234135   36175 mustload.go:65] Loading cluster: ha-565881
	I0717 00:27:39.234240   36175 notify.go:220] Checking for updates...
	I0717 00:27:39.234545   36175 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:27:39.234565   36175 status.go:255] checking status of ha-565881 ...
	I0717 00:27:39.234983   36175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:39.235041   36175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:39.253991   36175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40633
	I0717 00:27:39.254405   36175 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:39.254978   36175 main.go:141] libmachine: Using API Version  1
	I0717 00:27:39.255024   36175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:39.255440   36175 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:39.255664   36175 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:27:39.257691   36175 status.go:330] ha-565881 host status = "Running" (err=<nil>)
	I0717 00:27:39.257718   36175 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:39.258122   36175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:39.258173   36175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:39.273277   36175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33217
	I0717 00:27:39.273641   36175 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:39.274100   36175 main.go:141] libmachine: Using API Version  1
	I0717 00:27:39.274122   36175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:39.274499   36175 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:39.274682   36175 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:27:39.277337   36175 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:39.277740   36175 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:39.277776   36175 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:39.277899   36175 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:39.278177   36175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:39.278215   36175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:39.294356   36175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44093
	I0717 00:27:39.294676   36175 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:39.295108   36175 main.go:141] libmachine: Using API Version  1
	I0717 00:27:39.295128   36175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:39.295431   36175 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:39.295603   36175 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:27:39.295749   36175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:39.295771   36175 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:27:39.298407   36175 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:39.298754   36175 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:39.298785   36175 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:39.298891   36175 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:27:39.299050   36175 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:27:39.299224   36175 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:27:39.299380   36175 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:27:39.388754   36175 ssh_runner.go:195] Run: systemctl --version
	I0717 00:27:39.395833   36175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:39.410708   36175 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:39.410731   36175 api_server.go:166] Checking apiserver status ...
	I0717 00:27:39.410756   36175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:39.426217   36175 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0717 00:27:39.436153   36175 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:39.436213   36175 ssh_runner.go:195] Run: ls
	I0717 00:27:39.442153   36175 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:39.448220   36175 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:39.448243   36175 status.go:422] ha-565881 apiserver status = Running (err=<nil>)
	I0717 00:27:39.448251   36175 status.go:257] ha-565881 status: &{Name:ha-565881 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:39.448266   36175 status.go:255] checking status of ha-565881-m02 ...
	I0717 00:27:39.448692   36175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:39.448741   36175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:39.463936   36175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44043
	I0717 00:27:39.464436   36175 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:39.464899   36175 main.go:141] libmachine: Using API Version  1
	I0717 00:27:39.464915   36175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:39.465228   36175 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:39.465431   36175 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:27:39.466865   36175 status.go:330] ha-565881-m02 host status = "Stopping" (err=<nil>)
	I0717 00:27:39.466879   36175 status.go:343] host is not running, skipping remaining checks
	I0717 00:27:39.466885   36175 status.go:257] ha-565881-m02 status: &{Name:ha-565881-m02 Host:Stopping Kubelet:Stopping APIServer:Stopping Kubeconfig:Stopping Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:39.466903   36175 status.go:255] checking status of ha-565881-m03 ...
	I0717 00:27:39.467187   36175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:39.467221   36175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:39.481893   36175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37061
	I0717 00:27:39.482693   36175 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:39.483258   36175 main.go:141] libmachine: Using API Version  1
	I0717 00:27:39.483291   36175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:39.483675   36175 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:39.483849   36175 main.go:141] libmachine: (ha-565881-m03) Calling .GetState
	I0717 00:27:39.485338   36175 status.go:330] ha-565881-m03 host status = "Running" (err=<nil>)
	I0717 00:27:39.485354   36175 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:39.485630   36175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:39.485665   36175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:39.500279   36175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33407
	I0717 00:27:39.500641   36175 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:39.501091   36175 main.go:141] libmachine: Using API Version  1
	I0717 00:27:39.501112   36175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:39.501467   36175 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:39.501656   36175 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:27:39.504222   36175 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:39.504601   36175 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:39.504621   36175 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:39.504748   36175 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:39.505045   36175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:39.505092   36175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:39.521321   36175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I0717 00:27:39.521714   36175 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:39.522174   36175 main.go:141] libmachine: Using API Version  1
	I0717 00:27:39.522198   36175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:39.522524   36175 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:39.522718   36175 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:27:39.522899   36175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:39.522919   36175 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:27:39.525896   36175 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:39.526320   36175 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:39.526344   36175 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:39.526506   36175 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:27:39.526676   36175 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:27:39.526814   36175 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:27:39.526927   36175 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:27:39.617142   36175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:39.631379   36175 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:39.631409   36175 api_server.go:166] Checking apiserver status ...
	I0717 00:27:39.631448   36175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:39.644707   36175 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup
	W0717 00:27:39.653898   36175 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:39.653940   36175 ssh_runner.go:195] Run: ls
	I0717 00:27:39.658522   36175 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:39.662802   36175 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:39.662823   36175 status.go:422] ha-565881-m03 apiserver status = Running (err=<nil>)
	I0717 00:27:39.662834   36175 status.go:257] ha-565881-m03 status: &{Name:ha-565881-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:39.662854   36175 status.go:255] checking status of ha-565881-m04 ...
	I0717 00:27:39.663261   36175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:39.663301   36175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:39.678045   36175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0717 00:27:39.678473   36175 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:39.678985   36175 main.go:141] libmachine: Using API Version  1
	I0717 00:27:39.679004   36175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:39.679307   36175 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:39.679500   36175 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:27:39.681266   36175 status.go:330] ha-565881-m04 host status = "Running" (err=<nil>)
	I0717 00:27:39.681285   36175 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:39.681567   36175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:39.681604   36175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:39.695460   36175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I0717 00:27:39.695950   36175 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:39.696446   36175 main.go:141] libmachine: Using API Version  1
	I0717 00:27:39.696460   36175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:39.696790   36175 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:39.696965   36175 main.go:141] libmachine: (ha-565881-m04) Calling .GetIP
	I0717 00:27:39.699710   36175 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:39.700177   36175 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:39.700201   36175 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:39.700360   36175 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:39.700668   36175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:39.700699   36175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:39.715246   36175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I0717 00:27:39.715678   36175 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:39.716203   36175 main.go:141] libmachine: Using API Version  1
	I0717 00:27:39.716230   36175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:39.716626   36175 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:39.716800   36175 main.go:141] libmachine: (ha-565881-m04) Calling .DriverName
	I0717 00:27:39.716982   36175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:39.717004   36175 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	I0717 00:27:39.720172   36175 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:39.720681   36175 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:39.720701   36175 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:39.720960   36175 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHPort
	I0717 00:27:39.721102   36175 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHKeyPath
	I0717 00:27:39.721235   36175 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHUsername
	I0717 00:27:39.721327   36175 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m04/id_rsa Username:docker}
	I0717 00:27:39.804224   36175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:39.819091   36175 status.go:257] ha-565881-m04 status: &{Name:ha-565881-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr: exit status 7 (640.962342ms)

-- stdout --
	ha-565881
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-565881-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0717 00:27:49.941023   36306 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:27:49.941144   36306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:49.941154   36306 out.go:304] Setting ErrFile to fd 2...
	I0717 00:27:49.941158   36306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:49.941325   36306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:27:49.941481   36306 out.go:298] Setting JSON to false
	I0717 00:27:49.941510   36306 mustload.go:65] Loading cluster: ha-565881
	I0717 00:27:49.941642   36306 notify.go:220] Checking for updates...
	I0717 00:27:49.942057   36306 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:27:49.942078   36306 status.go:255] checking status of ha-565881 ...
	I0717 00:27:49.942616   36306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:49.942666   36306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:49.957932   36306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39533
	I0717 00:27:49.958358   36306 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:49.959007   36306 main.go:141] libmachine: Using API Version  1
	I0717 00:27:49.959033   36306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:49.959373   36306 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:49.959583   36306 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:27:49.961227   36306 status.go:330] ha-565881 host status = "Running" (err=<nil>)
	I0717 00:27:49.961254   36306 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:49.961571   36306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:49.961607   36306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:49.975508   36306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0717 00:27:49.975874   36306 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:49.976316   36306 main.go:141] libmachine: Using API Version  1
	I0717 00:27:49.976335   36306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:49.976705   36306 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:49.976892   36306 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:27:49.979831   36306 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:49.980243   36306 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:49.980271   36306 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:49.980436   36306 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:49.980759   36306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:49.980800   36306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:49.994824   36306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43917
	I0717 00:27:49.995237   36306 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:49.995660   36306 main.go:141] libmachine: Using API Version  1
	I0717 00:27:49.995680   36306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:49.995941   36306 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:49.996149   36306 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:27:49.996325   36306 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:49.996362   36306 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:27:49.998928   36306 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:49.999324   36306 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:49.999343   36306 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:49.999504   36306 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:27:49.999902   36306 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:27:50.000044   36306 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:27:50.000185   36306 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:27:50.084798   36306 ssh_runner.go:195] Run: systemctl --version
	I0717 00:27:50.091987   36306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:50.109287   36306 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:50.109319   36306 api_server.go:166] Checking apiserver status ...
	I0717 00:27:50.109353   36306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:50.132482   36306 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0717 00:27:50.144009   36306 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:50.144066   36306 ssh_runner.go:195] Run: ls
	I0717 00:27:50.148773   36306 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:50.155755   36306 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:50.155786   36306 status.go:422] ha-565881 apiserver status = Running (err=<nil>)
	I0717 00:27:50.155795   36306 status.go:257] ha-565881 status: &{Name:ha-565881 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:50.155812   36306 status.go:255] checking status of ha-565881-m02 ...
	I0717 00:27:50.156164   36306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:50.156201   36306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:50.170992   36306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45667
	I0717 00:27:50.171493   36306 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:50.172069   36306 main.go:141] libmachine: Using API Version  1
	I0717 00:27:50.172094   36306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:50.172475   36306 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:50.172670   36306 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:27:50.174319   36306 status.go:330] ha-565881-m02 host status = "Stopped" (err=<nil>)
	I0717 00:27:50.174334   36306 status.go:343] host is not running, skipping remaining checks
	I0717 00:27:50.174339   36306 status.go:257] ha-565881-m02 status: &{Name:ha-565881-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:50.174353   36306 status.go:255] checking status of ha-565881-m03 ...
	I0717 00:27:50.174633   36306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:50.174670   36306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:50.189439   36306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46637
	I0717 00:27:50.189810   36306 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:50.190296   36306 main.go:141] libmachine: Using API Version  1
	I0717 00:27:50.190317   36306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:50.190648   36306 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:50.190822   36306 main.go:141] libmachine: (ha-565881-m03) Calling .GetState
	I0717 00:27:50.192388   36306 status.go:330] ha-565881-m03 host status = "Running" (err=<nil>)
	I0717 00:27:50.192402   36306 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:50.192722   36306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:50.192761   36306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:50.209073   36306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41491
	I0717 00:27:50.209560   36306 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:50.210021   36306 main.go:141] libmachine: Using API Version  1
	I0717 00:27:50.210041   36306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:50.210384   36306 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:50.210589   36306 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:27:50.213698   36306 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:50.214228   36306 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:50.214258   36306 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:50.214387   36306 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:50.214685   36306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:50.214737   36306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:50.229239   36306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0717 00:27:50.229737   36306 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:50.230216   36306 main.go:141] libmachine: Using API Version  1
	I0717 00:27:50.230252   36306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:50.230637   36306 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:50.230858   36306 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:27:50.231042   36306 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:50.231075   36306 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:27:50.233873   36306 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:50.234264   36306 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:50.234285   36306 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:50.234439   36306 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:27:50.234604   36306 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:27:50.234771   36306 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:27:50.234912   36306 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:27:50.320349   36306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:50.339117   36306 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:50.339156   36306 api_server.go:166] Checking apiserver status ...
	I0717 00:27:50.339210   36306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:50.355561   36306 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup
	W0717 00:27:50.366750   36306 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:50.366799   36306 ssh_runner.go:195] Run: ls
	I0717 00:27:50.372404   36306 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:50.376891   36306 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:50.376916   36306 status.go:422] ha-565881-m03 apiserver status = Running (err=<nil>)
	I0717 00:27:50.376924   36306 status.go:257] ha-565881-m03 status: &{Name:ha-565881-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:50.376944   36306 status.go:255] checking status of ha-565881-m04 ...
	I0717 00:27:50.377319   36306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:50.377359   36306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:50.392280   36306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0717 00:27:50.392702   36306 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:50.393198   36306 main.go:141] libmachine: Using API Version  1
	I0717 00:27:50.393225   36306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:50.393556   36306 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:50.393733   36306 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:27:50.395200   36306 status.go:330] ha-565881-m04 host status = "Running" (err=<nil>)
	I0717 00:27:50.395217   36306 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:50.395499   36306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:50.395542   36306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:50.410355   36306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0717 00:27:50.410720   36306 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:50.411208   36306 main.go:141] libmachine: Using API Version  1
	I0717 00:27:50.411229   36306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:50.411535   36306 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:50.411712   36306 main.go:141] libmachine: (ha-565881-m04) Calling .GetIP
	I0717 00:27:50.414557   36306 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:50.414938   36306 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:50.414969   36306 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:50.415109   36306 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:50.415423   36306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:50.415465   36306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:50.430717   36306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0717 00:27:50.431102   36306 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:50.431620   36306 main.go:141] libmachine: Using API Version  1
	I0717 00:27:50.431641   36306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:50.431960   36306 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:50.432154   36306 main.go:141] libmachine: (ha-565881-m04) Calling .DriverName
	I0717 00:27:50.432327   36306 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:50.432343   36306 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	I0717 00:27:50.435406   36306 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:50.435981   36306 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:50.436021   36306 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:50.436159   36306 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHPort
	I0717 00:27:50.436328   36306 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHKeyPath
	I0717 00:27:50.436503   36306 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHUsername
	I0717 00:27:50.436646   36306 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m04/id_rsa Username:docker}
	I0717 00:27:50.524461   36306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:50.539295   36306 status.go:257] ha-565881-m04 status: &{Name:ha-565881-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr: exit status 7 (618.042749ms)

-- stdout --
	ha-565881
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-565881-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0717 00:27:58.159889   36394 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:27:58.160167   36394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:58.160178   36394 out.go:304] Setting ErrFile to fd 2...
	I0717 00:27:58.160182   36394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:27:58.160358   36394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:27:58.160522   36394 out.go:298] Setting JSON to false
	I0717 00:27:58.160551   36394 mustload.go:65] Loading cluster: ha-565881
	I0717 00:27:58.160605   36394 notify.go:220] Checking for updates...
	I0717 00:27:58.161079   36394 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:27:58.161101   36394 status.go:255] checking status of ha-565881 ...
	I0717 00:27:58.161576   36394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:58.161619   36394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:58.180938   36394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42965
	I0717 00:27:58.181312   36394 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:58.181903   36394 main.go:141] libmachine: Using API Version  1
	I0717 00:27:58.181929   36394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:58.182235   36394 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:58.182372   36394 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:27:58.184095   36394 status.go:330] ha-565881 host status = "Running" (err=<nil>)
	I0717 00:27:58.184113   36394 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:58.184501   36394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:58.184542   36394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:58.199755   36394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38753
	I0717 00:27:58.200192   36394 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:58.200692   36394 main.go:141] libmachine: Using API Version  1
	I0717 00:27:58.200711   36394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:58.201043   36394 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:58.201246   36394 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:27:58.203831   36394 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:58.204217   36394 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:58.204245   36394 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:58.204388   36394 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:27:58.204830   36394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:58.204883   36394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:58.219175   36394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35457
	I0717 00:27:58.219665   36394 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:58.220275   36394 main.go:141] libmachine: Using API Version  1
	I0717 00:27:58.220296   36394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:58.220653   36394 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:58.220866   36394 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:27:58.221228   36394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:58.221267   36394 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:27:58.223917   36394 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:58.224324   36394 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:27:58.224360   36394 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:27:58.224479   36394 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:27:58.224700   36394 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:27:58.224859   36394 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:27:58.225001   36394 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:27:58.308377   36394 ssh_runner.go:195] Run: systemctl --version
	I0717 00:27:58.314300   36394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:58.328614   36394 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:58.328639   36394 api_server.go:166] Checking apiserver status ...
	I0717 00:27:58.328679   36394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:58.344632   36394 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0717 00:27:58.356957   36394 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:58.357014   36394 ssh_runner.go:195] Run: ls
	I0717 00:27:58.361111   36394 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:58.365339   36394 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:58.365357   36394 status.go:422] ha-565881 apiserver status = Running (err=<nil>)
	I0717 00:27:58.365366   36394 status.go:257] ha-565881 status: &{Name:ha-565881 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:58.365386   36394 status.go:255] checking status of ha-565881-m02 ...
	I0717 00:27:58.365660   36394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:58.365688   36394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:58.380999   36394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0717 00:27:58.381381   36394 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:58.381799   36394 main.go:141] libmachine: Using API Version  1
	I0717 00:27:58.381816   36394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:58.382109   36394 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:58.382289   36394 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:27:58.383714   36394 status.go:330] ha-565881-m02 host status = "Stopped" (err=<nil>)
	I0717 00:27:58.383726   36394 status.go:343] host is not running, skipping remaining checks
	I0717 00:27:58.383732   36394 status.go:257] ha-565881-m02 status: &{Name:ha-565881-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:58.383746   36394 status.go:255] checking status of ha-565881-m03 ...
	I0717 00:27:58.384109   36394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:58.384148   36394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:58.398241   36394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
	I0717 00:27:58.398681   36394 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:58.399093   36394 main.go:141] libmachine: Using API Version  1
	I0717 00:27:58.399113   36394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:58.399420   36394 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:58.399611   36394 main.go:141] libmachine: (ha-565881-m03) Calling .GetState
	I0717 00:27:58.401042   36394 status.go:330] ha-565881-m03 host status = "Running" (err=<nil>)
	I0717 00:27:58.401064   36394 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:58.401333   36394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:58.401369   36394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:58.415051   36394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I0717 00:27:58.415393   36394 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:58.415798   36394 main.go:141] libmachine: Using API Version  1
	I0717 00:27:58.415821   36394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:58.416123   36394 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:58.416349   36394 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:27:58.419538   36394 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:58.420064   36394 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:58.420106   36394 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:58.420232   36394 host.go:66] Checking if "ha-565881-m03" exists ...
	I0717 00:27:58.420500   36394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:58.420534   36394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:58.434641   36394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35631
	I0717 00:27:58.435013   36394 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:58.435392   36394 main.go:141] libmachine: Using API Version  1
	I0717 00:27:58.435410   36394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:58.435730   36394 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:58.435910   36394 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:27:58.436125   36394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:58.436141   36394 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:27:58.438512   36394 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:58.438913   36394 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:27:58.438949   36394 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:27:58.439003   36394 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:27:58.439167   36394 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:27:58.439314   36394 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:27:58.439452   36394 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:27:58.525093   36394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:58.543748   36394 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:27:58.543774   36394 api_server.go:166] Checking apiserver status ...
	I0717 00:27:58.543804   36394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:58.559841   36394 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup
	W0717 00:27:58.570955   36394 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:27:58.571028   36394 ssh_runner.go:195] Run: ls
	I0717 00:27:58.575301   36394 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:27:58.579688   36394 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:27:58.579710   36394 status.go:422] ha-565881-m03 apiserver status = Running (err=<nil>)
	I0717 00:27:58.579720   36394 status.go:257] ha-565881-m03 status: &{Name:ha-565881-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:27:58.579738   36394 status.go:255] checking status of ha-565881-m04 ...
	I0717 00:27:58.580028   36394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:58.580075   36394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:58.595027   36394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40145
	I0717 00:27:58.595399   36394 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:58.595793   36394 main.go:141] libmachine: Using API Version  1
	I0717 00:27:58.595810   36394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:58.596151   36394 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:58.596356   36394 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:27:58.597842   36394 status.go:330] ha-565881-m04 host status = "Running" (err=<nil>)
	I0717 00:27:58.597856   36394 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:58.598141   36394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:58.598172   36394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:58.613666   36394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45757
	I0717 00:27:58.614088   36394 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:58.614576   36394 main.go:141] libmachine: Using API Version  1
	I0717 00:27:58.614596   36394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:58.614909   36394 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:58.615089   36394 main.go:141] libmachine: (ha-565881-m04) Calling .GetIP
	I0717 00:27:58.617961   36394 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:58.618477   36394 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:58.618513   36394 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:58.618658   36394 host.go:66] Checking if "ha-565881-m04" exists ...
	I0717 00:27:58.619017   36394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:27:58.619050   36394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:27:58.634322   36394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34685
	I0717 00:27:58.634783   36394 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:27:58.635293   36394 main.go:141] libmachine: Using API Version  1
	I0717 00:27:58.635316   36394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:27:58.635683   36394 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:27:58.635881   36394 main.go:141] libmachine: (ha-565881-m04) Calling .DriverName
	I0717 00:27:58.636064   36394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:27:58.636081   36394 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	I0717 00:27:58.638626   36394 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:58.639012   36394 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:27:58.639040   36394 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:27:58.639101   36394 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHPort
	I0717 00:27:58.639295   36394 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHKeyPath
	I0717 00:27:58.639443   36394 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHUsername
	I0717 00:27:58.639633   36394 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m04/id_rsa Username:docker}
	I0717 00:27:58.720240   36394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:58.734961   36394 status.go:257] ha-565881-m04 status: &{Name:ha-565881-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565881 -n ha-565881
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565881 logs -n 25: (1.479136478s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881:/home/docker/cp-test_ha-565881-m03_ha-565881.txt                      |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881 sudo cat                                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m03_ha-565881.txt                                |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m02:/home/docker/cp-test_ha-565881-m03_ha-565881-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m02 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m03_ha-565881-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04:/home/docker/cp-test_ha-565881-m03_ha-565881-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m04 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m03_ha-565881-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp testdata/cp-test.txt                                               | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile507733948/001/cp-test_ha-565881-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881:/home/docker/cp-test_ha-565881-m04_ha-565881.txt                      |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881 sudo cat                                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881.txt                                |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m02:/home/docker/cp-test_ha-565881-m04_ha-565881-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m02 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03:/home/docker/cp-test_ha-565881-m04_ha-565881-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m03 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-565881 node stop m02 -v=7                                                    | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-565881 node start m02 -v=7                                                   | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:27 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:19:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:19:58.740650   30817 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:19:58.740769   30817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:19:58.740779   30817 out.go:304] Setting ErrFile to fd 2...
	I0717 00:19:58.740786   30817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:19:58.740972   30817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:19:58.741512   30817 out.go:298] Setting JSON to false
	I0717 00:19:58.742317   30817 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3748,"bootTime":1721171851,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:19:58.742373   30817 start.go:139] virtualization: kvm guest
	I0717 00:19:58.744467   30817 out.go:177] * [ha-565881] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:19:58.745816   30817 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:19:58.745875   30817 notify.go:220] Checking for updates...
	I0717 00:19:58.748121   30817 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:19:58.749407   30817 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:19:58.750607   30817 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:19:58.751754   30817 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:19:58.752866   30817 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:19:58.754143   30817 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:19:58.787281   30817 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 00:19:58.788393   30817 start.go:297] selected driver: kvm2
	I0717 00:19:58.788410   30817 start.go:901] validating driver "kvm2" against <nil>
	I0717 00:19:58.788423   30817 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:19:58.789142   30817 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:19:58.789222   30817 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:19:58.803958   30817 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:19:58.804000   30817 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:19:58.804221   30817 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:19:58.804268   30817 cni.go:84] Creating CNI manager for ""
	I0717 00:19:58.804280   30817 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0717 00:19:58.804285   30817 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:19:58.804349   30817 start.go:340] cluster config:
	{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:19:58.804438   30817 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:19:58.805891   30817 out.go:177] * Starting "ha-565881" primary control-plane node in "ha-565881" cluster
	I0717 00:19:58.806911   30817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:19:58.806940   30817 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:19:58.806946   30817 cache.go:56] Caching tarball of preloaded images
	I0717 00:19:58.807007   30817 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:19:58.807016   30817 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:19:58.807294   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:19:58.807314   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json: {Name:mk0bce3779ec18ce7d646e20c895f513860f7b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:19:58.807428   30817 start.go:360] acquireMachinesLock for ha-565881: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:19:58.807453   30817 start.go:364] duration metric: took 14.072µs to acquireMachinesLock for "ha-565881"
	I0717 00:19:58.807468   30817 start.go:93] Provisioning new machine with config: &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:19:58.807517   30817 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 00:19:58.808930   30817 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 00:19:58.809055   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:19:58.809092   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:19:58.822695   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44733
	I0717 00:19:58.823149   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:19:58.823696   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:19:58.823715   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:19:58.824046   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:19:58.824222   30817 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:19:58.824434   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:19:58.824615   30817 start.go:159] libmachine.API.Create for "ha-565881" (driver="kvm2")
	I0717 00:19:58.824639   30817 client.go:168] LocalClient.Create starting
	I0717 00:19:58.824664   30817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 00:19:58.824695   30817 main.go:141] libmachine: Decoding PEM data...
	I0717 00:19:58.824712   30817 main.go:141] libmachine: Parsing certificate...
	I0717 00:19:58.824761   30817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 00:19:58.824778   30817 main.go:141] libmachine: Decoding PEM data...
	I0717 00:19:58.824788   30817 main.go:141] libmachine: Parsing certificate...
	I0717 00:19:58.824804   30817 main.go:141] libmachine: Running pre-create checks...
	I0717 00:19:58.824816   30817 main.go:141] libmachine: (ha-565881) Calling .PreCreateCheck
	I0717 00:19:58.825177   30817 main.go:141] libmachine: (ha-565881) Calling .GetConfigRaw
	I0717 00:19:58.825686   30817 main.go:141] libmachine: Creating machine...
	I0717 00:19:58.825700   30817 main.go:141] libmachine: (ha-565881) Calling .Create
	I0717 00:19:58.825859   30817 main.go:141] libmachine: (ha-565881) Creating KVM machine...
	I0717 00:19:58.827115   30817 main.go:141] libmachine: (ha-565881) DBG | found existing default KVM network
	I0717 00:19:58.827768   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:19:58.827647   30840 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045c0}
	I0717 00:19:58.827851   30817 main.go:141] libmachine: (ha-565881) DBG | created network xml: 
	I0717 00:19:58.827872   30817 main.go:141] libmachine: (ha-565881) DBG | <network>
	I0717 00:19:58.827883   30817 main.go:141] libmachine: (ha-565881) DBG |   <name>mk-ha-565881</name>
	I0717 00:19:58.827894   30817 main.go:141] libmachine: (ha-565881) DBG |   <dns enable='no'/>
	I0717 00:19:58.827905   30817 main.go:141] libmachine: (ha-565881) DBG |   
	I0717 00:19:58.827918   30817 main.go:141] libmachine: (ha-565881) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 00:19:58.827930   30817 main.go:141] libmachine: (ha-565881) DBG |     <dhcp>
	I0717 00:19:58.827942   30817 main.go:141] libmachine: (ha-565881) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 00:19:58.827954   30817 main.go:141] libmachine: (ha-565881) DBG |     </dhcp>
	I0717 00:19:58.827964   30817 main.go:141] libmachine: (ha-565881) DBG |   </ip>
	I0717 00:19:58.827971   30817 main.go:141] libmachine: (ha-565881) DBG |   
	I0717 00:19:58.827978   30817 main.go:141] libmachine: (ha-565881) DBG | </network>
	I0717 00:19:58.827985   30817 main.go:141] libmachine: (ha-565881) DBG | 
	I0717 00:19:58.832646   30817 main.go:141] libmachine: (ha-565881) DBG | trying to create private KVM network mk-ha-565881 192.168.39.0/24...
	I0717 00:19:58.895480   30817 main.go:141] libmachine: (ha-565881) DBG | private KVM network mk-ha-565881 192.168.39.0/24 created
	I0717 00:19:58.895511   30817 main.go:141] libmachine: (ha-565881) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881 ...
	I0717 00:19:58.895523   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:19:58.895474   30840 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:19:58.895540   30817 main.go:141] libmachine: (ha-565881) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 00:19:58.895747   30817 main.go:141] libmachine: (ha-565881) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 00:19:59.131408   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:19:59.131300   30840 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa...
	I0717 00:19:59.246760   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:19:59.246623   30840 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/ha-565881.rawdisk...
	I0717 00:19:59.246795   30817 main.go:141] libmachine: (ha-565881) DBG | Writing magic tar header
	I0717 00:19:59.246806   30817 main.go:141] libmachine: (ha-565881) DBG | Writing SSH key tar header
	I0717 00:19:59.246814   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:19:59.246733   30840 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881 ...
	I0717 00:19:59.246850   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881
	I0717 00:19:59.246873   30817 main.go:141] libmachine: (ha-565881) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881 (perms=drwx------)
	I0717 00:19:59.246884   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 00:19:59.246895   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:19:59.246905   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 00:19:59.246912   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:19:59.246922   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:19:59.246933   30817 main.go:141] libmachine: (ha-565881) DBG | Checking permissions on dir: /home
	I0717 00:19:59.246952   30817 main.go:141] libmachine: (ha-565881) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:19:59.246963   30817 main.go:141] libmachine: (ha-565881) DBG | Skipping /home - not owner
	I0717 00:19:59.246979   30817 main.go:141] libmachine: (ha-565881) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 00:19:59.246988   30817 main.go:141] libmachine: (ha-565881) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 00:19:59.246997   30817 main.go:141] libmachine: (ha-565881) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:19:59.247007   30817 main.go:141] libmachine: (ha-565881) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:19:59.247021   30817 main.go:141] libmachine: (ha-565881) Creating domain...
	I0717 00:19:59.248198   30817 main.go:141] libmachine: (ha-565881) define libvirt domain using xml: 
	I0717 00:19:59.248216   30817 main.go:141] libmachine: (ha-565881) <domain type='kvm'>
	I0717 00:19:59.248222   30817 main.go:141] libmachine: (ha-565881)   <name>ha-565881</name>
	I0717 00:19:59.248227   30817 main.go:141] libmachine: (ha-565881)   <memory unit='MiB'>2200</memory>
	I0717 00:19:59.248232   30817 main.go:141] libmachine: (ha-565881)   <vcpu>2</vcpu>
	I0717 00:19:59.248238   30817 main.go:141] libmachine: (ha-565881)   <features>
	I0717 00:19:59.248244   30817 main.go:141] libmachine: (ha-565881)     <acpi/>
	I0717 00:19:59.248252   30817 main.go:141] libmachine: (ha-565881)     <apic/>
	I0717 00:19:59.248256   30817 main.go:141] libmachine: (ha-565881)     <pae/>
	I0717 00:19:59.248264   30817 main.go:141] libmachine: (ha-565881)     
	I0717 00:19:59.248280   30817 main.go:141] libmachine: (ha-565881)   </features>
	I0717 00:19:59.248284   30817 main.go:141] libmachine: (ha-565881)   <cpu mode='host-passthrough'>
	I0717 00:19:59.248289   30817 main.go:141] libmachine: (ha-565881)   
	I0717 00:19:59.248293   30817 main.go:141] libmachine: (ha-565881)   </cpu>
	I0717 00:19:59.248298   30817 main.go:141] libmachine: (ha-565881)   <os>
	I0717 00:19:59.248305   30817 main.go:141] libmachine: (ha-565881)     <type>hvm</type>
	I0717 00:19:59.248311   30817 main.go:141] libmachine: (ha-565881)     <boot dev='cdrom'/>
	I0717 00:19:59.248322   30817 main.go:141] libmachine: (ha-565881)     <boot dev='hd'/>
	I0717 00:19:59.248334   30817 main.go:141] libmachine: (ha-565881)     <bootmenu enable='no'/>
	I0717 00:19:59.248343   30817 main.go:141] libmachine: (ha-565881)   </os>
	I0717 00:19:59.248354   30817 main.go:141] libmachine: (ha-565881)   <devices>
	I0717 00:19:59.248375   30817 main.go:141] libmachine: (ha-565881)     <disk type='file' device='cdrom'>
	I0717 00:19:59.248419   30817 main.go:141] libmachine: (ha-565881)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/boot2docker.iso'/>
	I0717 00:19:59.248441   30817 main.go:141] libmachine: (ha-565881)       <target dev='hdc' bus='scsi'/>
	I0717 00:19:59.248449   30817 main.go:141] libmachine: (ha-565881)       <readonly/>
	I0717 00:19:59.248457   30817 main.go:141] libmachine: (ha-565881)     </disk>
	I0717 00:19:59.248463   30817 main.go:141] libmachine: (ha-565881)     <disk type='file' device='disk'>
	I0717 00:19:59.248471   30817 main.go:141] libmachine: (ha-565881)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:19:59.248480   30817 main.go:141] libmachine: (ha-565881)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/ha-565881.rawdisk'/>
	I0717 00:19:59.248487   30817 main.go:141] libmachine: (ha-565881)       <target dev='hda' bus='virtio'/>
	I0717 00:19:59.248495   30817 main.go:141] libmachine: (ha-565881)     </disk>
	I0717 00:19:59.248500   30817 main.go:141] libmachine: (ha-565881)     <interface type='network'>
	I0717 00:19:59.248508   30817 main.go:141] libmachine: (ha-565881)       <source network='mk-ha-565881'/>
	I0717 00:19:59.248512   30817 main.go:141] libmachine: (ha-565881)       <model type='virtio'/>
	I0717 00:19:59.248537   30817 main.go:141] libmachine: (ha-565881)     </interface>
	I0717 00:19:59.248576   30817 main.go:141] libmachine: (ha-565881)     <interface type='network'>
	I0717 00:19:59.248591   30817 main.go:141] libmachine: (ha-565881)       <source network='default'/>
	I0717 00:19:59.248601   30817 main.go:141] libmachine: (ha-565881)       <model type='virtio'/>
	I0717 00:19:59.248612   30817 main.go:141] libmachine: (ha-565881)     </interface>
	I0717 00:19:59.248622   30817 main.go:141] libmachine: (ha-565881)     <serial type='pty'>
	I0717 00:19:59.248635   30817 main.go:141] libmachine: (ha-565881)       <target port='0'/>
	I0717 00:19:59.248647   30817 main.go:141] libmachine: (ha-565881)     </serial>
	I0717 00:19:59.248665   30817 main.go:141] libmachine: (ha-565881)     <console type='pty'>
	I0717 00:19:59.248683   30817 main.go:141] libmachine: (ha-565881)       <target type='serial' port='0'/>
	I0717 00:19:59.248699   30817 main.go:141] libmachine: (ha-565881)     </console>
	I0717 00:19:59.248710   30817 main.go:141] libmachine: (ha-565881)     <rng model='virtio'>
	I0717 00:19:59.248722   30817 main.go:141] libmachine: (ha-565881)       <backend model='random'>/dev/random</backend>
	I0717 00:19:59.248732   30817 main.go:141] libmachine: (ha-565881)     </rng>
	I0717 00:19:59.248740   30817 main.go:141] libmachine: (ha-565881)     
	I0717 00:19:59.248744   30817 main.go:141] libmachine: (ha-565881)     
	I0717 00:19:59.248754   30817 main.go:141] libmachine: (ha-565881)   </devices>
	I0717 00:19:59.248765   30817 main.go:141] libmachine: (ha-565881) </domain>
	I0717 00:19:59.248776   30817 main.go:141] libmachine: (ha-565881) 
	I0717 00:19:59.252949   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:04:20:e4 in network default
	I0717 00:19:59.253428   30817 main.go:141] libmachine: (ha-565881) Ensuring networks are active...
	I0717 00:19:59.253444   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:19:59.254035   30817 main.go:141] libmachine: (ha-565881) Ensuring network default is active
	I0717 00:19:59.254266   30817 main.go:141] libmachine: (ha-565881) Ensuring network mk-ha-565881 is active
	I0717 00:19:59.254684   30817 main.go:141] libmachine: (ha-565881) Getting domain xml...
	I0717 00:19:59.255485   30817 main.go:141] libmachine: (ha-565881) Creating domain...
	I0717 00:20:00.439716   30817 main.go:141] libmachine: (ha-565881) Waiting to get IP...
	I0717 00:20:00.440504   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:00.440831   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:00.440857   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:00.440813   30840 retry.go:31] will retry after 279.96745ms: waiting for machine to come up
	I0717 00:20:00.722294   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:00.722799   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:00.722825   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:00.722742   30840 retry.go:31] will retry after 319.661574ms: waiting for machine to come up
	I0717 00:20:01.045618   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:01.046162   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:01.046190   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:01.046102   30840 retry.go:31] will retry after 366.795432ms: waiting for machine to come up
	I0717 00:20:01.414622   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:01.415055   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:01.415078   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:01.415021   30840 retry.go:31] will retry after 561.296643ms: waiting for machine to come up
	I0717 00:20:01.977961   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:01.978449   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:01.978477   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:01.978405   30840 retry.go:31] will retry after 517.966337ms: waiting for machine to come up
	I0717 00:20:02.498132   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:02.498673   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:02.498694   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:02.498647   30840 retry.go:31] will retry after 609.470693ms: waiting for machine to come up
	I0717 00:20:03.109589   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:03.109946   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:03.109980   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:03.109917   30840 retry.go:31] will retry after 917.846378ms: waiting for machine to come up
	I0717 00:20:04.029475   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:04.029926   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:04.029962   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:04.029889   30840 retry.go:31] will retry after 992.674633ms: waiting for machine to come up
	I0717 00:20:05.023753   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:05.024260   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:05.024286   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:05.024220   30840 retry.go:31] will retry after 1.465280494s: waiting for machine to come up
	I0717 00:20:06.492017   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:06.492366   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:06.492397   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:06.492330   30840 retry.go:31] will retry after 2.258281771s: waiting for machine to come up
	I0717 00:20:08.751788   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:08.752306   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:08.752330   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:08.752260   30840 retry.go:31] will retry after 1.924347004s: waiting for machine to come up
	I0717 00:20:10.678814   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:10.679150   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:10.679183   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:10.679106   30840 retry.go:31] will retry after 3.289331366s: waiting for machine to come up
	I0717 00:20:13.970143   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:13.970436   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:13.970476   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:13.970410   30840 retry.go:31] will retry after 2.743570764s: waiting for machine to come up
	I0717 00:20:16.717289   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:16.717628   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find current IP address of domain ha-565881 in network mk-ha-565881
	I0717 00:20:16.717673   30817 main.go:141] libmachine: (ha-565881) DBG | I0717 00:20:16.717595   30840 retry.go:31] will retry after 4.080092625s: waiting for machine to come up
	I0717 00:20:20.800532   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:20.800958   30817 main.go:141] libmachine: (ha-565881) Found IP for machine: 192.168.39.238
	I0717 00:20:20.800985   30817 main.go:141] libmachine: (ha-565881) Reserving static IP address...
	I0717 00:20:20.800998   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has current primary IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:20.801368   30817 main.go:141] libmachine: (ha-565881) DBG | unable to find host DHCP lease matching {name: "ha-565881", mac: "52:54:00:ff:f7:b6", ip: "192.168.39.238"} in network mk-ha-565881
	I0717 00:20:20.872223   30817 main.go:141] libmachine: (ha-565881) DBG | Getting to WaitForSSH function...
	I0717 00:20:20.872252   30817 main.go:141] libmachine: (ha-565881) Reserved static IP address: 192.168.39.238
	I0717 00:20:20.872264   30817 main.go:141] libmachine: (ha-565881) Waiting for SSH to be available...
	I0717 00:20:20.874531   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:20.874938   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:20.874970   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:20.875077   30817 main.go:141] libmachine: (ha-565881) DBG | Using SSH client type: external
	I0717 00:20:20.875100   30817 main.go:141] libmachine: (ha-565881) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa (-rw-------)
	I0717 00:20:20.875123   30817 main.go:141] libmachine: (ha-565881) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:20:20.875148   30817 main.go:141] libmachine: (ha-565881) DBG | About to run SSH command:
	I0717 00:20:20.875162   30817 main.go:141] libmachine: (ha-565881) DBG | exit 0
	I0717 00:20:21.000487   30817 main.go:141] libmachine: (ha-565881) DBG | SSH cmd err, output: <nil>: 
	I0717 00:20:21.000762   30817 main.go:141] libmachine: (ha-565881) KVM machine creation complete!
	I0717 00:20:21.001109   30817 main.go:141] libmachine: (ha-565881) Calling .GetConfigRaw
	I0717 00:20:21.001770   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:21.002024   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:21.002265   30817 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:20:21.002283   30817 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:20:21.003590   30817 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:20:21.003604   30817 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:20:21.003610   30817 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:20:21.003616   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.005956   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.006317   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.006348   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.006423   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:21.006583   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.006719   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.006873   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:21.007037   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:20:21.007214   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:20:21.007226   30817 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:20:21.120128   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:20:21.120163   30817 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:20:21.120175   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.122946   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.123327   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.123350   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.123498   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:21.123697   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.123845   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.124005   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:21.124188   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:20:21.124354   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:20:21.124364   30817 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:20:21.237438   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:20:21.237529   30817 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:20:21.237543   30817 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:20:21.237555   30817 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:20:21.237795   30817 buildroot.go:166] provisioning hostname "ha-565881"
	I0717 00:20:21.237818   30817 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:20:21.238018   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.240425   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.240735   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.240759   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.240925   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:21.241079   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.241237   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.241337   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:21.241577   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:20:21.241741   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:20:21.241755   30817 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565881 && echo "ha-565881" | sudo tee /etc/hostname
	I0717 00:20:21.366943   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881
	
	I0717 00:20:21.366981   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.369796   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.370176   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.370206   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.370413   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:21.370613   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.370779   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.370935   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:21.371087   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:20:21.371400   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:20:21.371436   30817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565881/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:20:21.489980   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:20:21.490007   30817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:20:21.490029   30817 buildroot.go:174] setting up certificates
	I0717 00:20:21.490040   30817 provision.go:84] configureAuth start
	I0717 00:20:21.490051   30817 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:20:21.490431   30817 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:20:21.493171   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.493531   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.493554   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.493744   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.496311   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.496694   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.496717   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.496865   30817 provision.go:143] copyHostCerts
	I0717 00:20:21.496893   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:20:21.496969   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 00:20:21.496980   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:20:21.497076   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:20:21.497217   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:20:21.497247   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 00:20:21.497258   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:20:21.497303   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:20:21.497382   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:20:21.497405   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 00:20:21.497414   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:20:21.497450   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:20:21.497525   30817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.ha-565881 san=[127.0.0.1 192.168.39.238 ha-565881 localhost minikube]
	I0717 00:20:21.619638   30817 provision.go:177] copyRemoteCerts
	I0717 00:20:21.619692   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:20:21.619715   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.622265   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.622627   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.622660   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.622817   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:21.623029   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.623195   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:21.623349   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:20:21.707053   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:20:21.707136   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:20:21.731617   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:20:21.731688   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 00:20:21.756115   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:20:21.756182   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:20:21.779347   30817 provision.go:87] duration metric: took 289.296091ms to configureAuth
	I0717 00:20:21.779370   30817 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:20:21.779548   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:20:21.779614   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:21.782086   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.782387   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:21.782424   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:21.782566   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:21.782786   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.782972   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:21.783125   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:21.783259   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:20:21.783429   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:20:21.783451   30817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:20:22.065032   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:20:22.065059   30817 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:20:22.065079   30817 main.go:141] libmachine: (ha-565881) Calling .GetURL
	I0717 00:20:22.066557   30817 main.go:141] libmachine: (ha-565881) DBG | Using libvirt version 6000000
	I0717 00:20:22.068726   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.069010   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.069039   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.069173   30817 main.go:141] libmachine: Docker is up and running!
	I0717 00:20:22.069190   30817 main.go:141] libmachine: Reticulating splines...
	I0717 00:20:22.069197   30817 client.go:171] duration metric: took 23.244551778s to LocalClient.Create
	I0717 00:20:22.069221   30817 start.go:167] duration metric: took 23.244608294s to libmachine.API.Create "ha-565881"
	I0717 00:20:22.069232   30817 start.go:293] postStartSetup for "ha-565881" (driver="kvm2")
	I0717 00:20:22.069241   30817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:20:22.069270   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:22.069550   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:20:22.069572   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:22.071733   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.071977   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.072000   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.072161   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:22.072350   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:22.072519   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:22.072687   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:20:22.159752   30817 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:20:22.163990   30817 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:20:22.164010   30817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 00:20:22.164064   30817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 00:20:22.164149   30817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 00:20:22.164156   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /etc/ssl/certs/200682.pem
	I0717 00:20:22.164247   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:20:22.173941   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:20:22.197521   30817 start.go:296] duration metric: took 128.276491ms for postStartSetup
	I0717 00:20:22.197568   30817 main.go:141] libmachine: (ha-565881) Calling .GetConfigRaw
	I0717 00:20:22.198123   30817 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:20:22.200694   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.200990   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.201021   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.201240   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:20:22.201442   30817 start.go:128] duration metric: took 23.39391468s to createHost
	I0717 00:20:22.201463   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:22.203691   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.204165   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.204185   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.204226   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:22.204417   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:22.204598   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:22.204709   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:22.204884   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:20:22.205047   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:20:22.205077   30817 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 00:20:22.317174   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721175622.292495790
	
	I0717 00:20:22.317197   30817 fix.go:216] guest clock: 1721175622.292495790
	I0717 00:20:22.317206   30817 fix.go:229] Guest: 2024-07-17 00:20:22.29249579 +0000 UTC Remote: 2024-07-17 00:20:22.201454346 +0000 UTC m=+23.494146658 (delta=91.041444ms)
	I0717 00:20:22.317247   30817 fix.go:200] guest clock delta is within tolerance: 91.041444ms
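
The delta above is just the guest's reported clock minus the host-side timestamp taken when the SSH command returned; the provisioner only resyncs the guest clock when that difference is too large. A rough Go sketch of the check (the one-second tolerance here is an assumption for illustration, not necessarily the threshold minikube uses):

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether the guest clock is close enough to the
    // host clock that no resync is needed.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(91 * time.Millisecond) // roughly the delta seen in the log above
    	fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second))
    }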
	I0717 00:20:22.317254   30817 start.go:83] releasing machines lock for "ha-565881", held for 23.509792724s
	I0717 00:20:22.317280   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:22.317560   30817 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:20:22.320411   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.320783   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.320828   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.320988   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:22.321419   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:22.321564   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:22.321641   30817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:20:22.321688   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:22.321746   30817 ssh_runner.go:195] Run: cat /version.json
	I0717 00:20:22.321769   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:22.323939   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.324287   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.324313   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.324338   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.324467   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:22.324659   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:22.324744   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:22.324770   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:22.324794   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:22.324884   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:22.324955   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:20:22.325040   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:22.325209   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:22.325333   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:20:22.426318   30817 ssh_runner.go:195] Run: systemctl --version
	I0717 00:20:22.432459   30817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:20:22.588650   30817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:20:22.595562   30817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:20:22.595625   30817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:20:22.611569   30817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:20:22.611594   30817 start.go:495] detecting cgroup driver to use...
	I0717 00:20:22.611664   30817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:20:22.628915   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:20:22.642572   30817 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:20:22.642621   30817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:20:22.655563   30817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:20:22.668534   30817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:20:22.778146   30817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:20:22.943243   30817 docker.go:233] disabling docker service ...
	I0717 00:20:22.943313   30817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:20:22.965917   30817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:20:22.978471   30817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:20:23.098504   30817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:20:23.206963   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:20:23.220162   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:20:23.238772   30817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:20:23.238852   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:20:23.249269   30817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:20:23.249332   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:20:23.259773   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:20:23.269559   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:20:23.279583   30817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:20:23.289368   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:20:23.299052   30817 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:20:23.316677   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
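
Taken together, the sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O pins the pause image to registry.k8s.io/pause:3.9, uses the cgroupfs cgroup manager with conmon in the "pod" cgroup, and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A condensed Go sketch of the same in-place edits (illustrative only; it works on a local copy of the drop-in rather than over SSH and omits the sysctl handling):

    package main

    import (
    	"os"
    	"regexp"
    )

    // rewriteCrioDropIn applies roughly the same edits as the sed commands in
    // the log: pin the pause image, switch to the cgroupfs cgroup manager, and
    // put conmon into the pod cgroup.
    func rewriteCrioDropIn(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	s := string(data)
    	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
    	s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(s, "")
    	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
    	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	return os.WriteFile(path, []byte(s), 0644)
    }

    func main() {
    	// Placeholder path; minikube edits /etc/crio/crio.conf.d/02-crio.conf on the guest.
    	if err := rewriteCrioDropIn("02-crio.conf"); err != nil {
    		panic(err)
    	}
    }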
	I0717 00:20:23.327278   30817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:20:23.336270   30817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:20:23.336328   30817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:20:23.349361   30817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:20:23.358454   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:20:23.470512   30817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:20:23.607031   30817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:20:23.607093   30817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:20:23.612093   30817 start.go:563] Will wait 60s for crictl version
	I0717 00:20:23.612169   30817 ssh_runner.go:195] Run: which crictl
	I0717 00:20:23.615996   30817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:20:23.653503   30817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:20:23.653582   30817 ssh_runner.go:195] Run: crio --version
	I0717 00:20:23.680930   30817 ssh_runner.go:195] Run: crio --version
	I0717 00:20:23.711136   30817 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:20:23.712435   30817 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:20:23.715351   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:23.715819   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:23.715842   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:23.716106   30817 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:20:23.720427   30817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:20:23.734557   30817 kubeadm.go:883] updating cluster {Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:20:23.734686   30817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:20:23.734747   30817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:20:23.767825   30817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 00:20:23.767888   30817 ssh_runner.go:195] Run: which lz4
	I0717 00:20:23.771651   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 00:20:23.771733   30817 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 00:20:23.775721   30817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 00:20:23.775742   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 00:20:25.129825   30817 crio.go:462] duration metric: took 1.358114684s to copy over tarball
	I0717 00:20:25.129913   30817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 00:20:27.228863   30817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.098922659s)
	I0717 00:20:27.228889   30817 crio.go:469] duration metric: took 2.099034446s to extract the tarball
	I0717 00:20:27.228898   30817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 00:20:27.267708   30817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:20:27.314764   30817 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:20:27.314788   30817 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:20:27.314796   30817 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.30.2 crio true true} ...
	I0717 00:20:27.314905   30817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:20:27.314968   30817 ssh_runner.go:195] Run: crio config
	I0717 00:20:27.358528   30817 cni.go:84] Creating CNI manager for ""
	I0717 00:20:27.358555   30817 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 00:20:27.358566   30817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:20:27.358588   30817 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565881 NodeName:ha-565881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:20:27.358720   30817 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565881"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
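The kubeadm YAML above is rendered from the option set logged at kubeadm.go:181 and later copied to /var/tmp/minikube/kubeadm.yaml.new. A toy Go sketch of that render step with text/template; the struct and template here are trimmed stand-ins for illustration, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeadmParams is a trimmed stand-in for the kubeadm options logged above.
    type kubeadmParams struct {
    	AdvertiseAddress  string
    	APIServerPort     int
    	NodeName          string
    	PodSubnet         string
    	ServiceCIDR       string
    	KubernetesVersion string
    }

    const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
    	p := kubeadmParams{
    		AdvertiseAddress:  "192.168.39.238",
    		APIServerPort:     8443,
    		NodeName:          "ha-565881",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceCIDR:       "10.96.0.0/12",
    		KubernetesVersion: "v1.30.2",
    	}
    	t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
    	if err := t.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }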
	I0717 00:20:27.358741   30817 kube-vip.go:115] generating kube-vip config ...
	I0717 00:20:27.358783   30817 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:20:27.375274   30817 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:20:27.375387   30817 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
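
The lb_enable and lb_port entries in the manifest above are present only because the modprobe probe a few lines earlier (ssh_runner.go:195, kube-vip.go:167) succeeded. A minimal Go sketch of that capability check; minikube runs it over SSH on the guest, shown here as a plain local exec for illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // lbCapable mirrors the probe in the log: kube-vip's control-plane
    // load-balancing is only turned on when the IPVS and conntrack modules
    // can be loaded on the guest.
    func lbCapable() bool {
    	err := exec.Command("sudo", "sh", "-c",
    		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
    	return err == nil
    }

    func main() {
    	fmt.Println("lb_enable:", lbCapable())
    }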
	I0717 00:20:27.375441   30817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:20:27.385362   30817 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:20:27.385428   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 00:20:27.394951   30817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 00:20:27.411402   30817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:20:27.428532   30817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 00:20:27.444909   30817 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0717 00:20:27.460904   30817 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:20:27.464763   30817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:20:27.477002   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:20:27.607936   30817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:20:27.626049   30817 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881 for IP: 192.168.39.238
	I0717 00:20:27.626074   30817 certs.go:194] generating shared ca certs ...
	I0717 00:20:27.626093   30817 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:27.626252   30817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 00:20:27.626306   30817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 00:20:27.626319   30817 certs.go:256] generating profile certs ...
	I0717 00:20:27.626422   30817 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key
	I0717 00:20:27.626453   30817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.crt with IP's: []
	I0717 00:20:27.920724   30817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.crt ...
	I0717 00:20:27.920749   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.crt: {Name:mk5d1137087700efa0f3abecf8f2e2e63a2bbf92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:27.920907   30817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key ...
	I0717 00:20:27.920918   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key: {Name:mk637fa6caecf24ee3b93c51fdb89fafa5939ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:27.920988   30817 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.61cc86ec
	I0717 00:20:27.921001   30817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.61cc86ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.254]
	I0717 00:20:28.103272   30817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.61cc86ec ...
	I0717 00:20:28.103300   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.61cc86ec: {Name:mk579d14b971844df09f8ab5aeaf81190afa9f9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:28.103452   30817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.61cc86ec ...
	I0717 00:20:28.103464   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.61cc86ec: {Name:mk76b1ccb949508d4fd35d54e3f9bf659d7656aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:28.103528   30817 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.61cc86ec -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt
	I0717 00:20:28.103619   30817 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.61cc86ec -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key
	I0717 00:20:28.103683   30817 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key
	I0717 00:20:28.103697   30817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt with IP's: []
	I0717 00:20:28.212939   30817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt ...
	I0717 00:20:28.212964   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt: {Name:mk0c4fe949694602f58bd41c63de8ede692cca0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:28.213106   30817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key ...
	I0717 00:20:28.213116   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key: {Name:mk99c0650071c42da3360e314f055c42b03db4f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
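The profile certs above are ordinary x509 certificates signed by the shared minikubeCA, with the SAN list (for the apiserver cert: 10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.254, plus a few DNS names) fixed at generation time. A self-contained Go sketch of that kind of SAN-bearing, CA-signed cert generation; it creates a throwaway CA instead of loading the cached minikubeCA key pair, and error handling is elided for brevity:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (stand-in for the cached minikubeCA key pair).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf cert carrying the SAN IPs and DNS names seen in the log above.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.238"),
    			net.ParseIP("192.168.39.254"),
    		},
    		DNSNames: []string{"ha-565881", "localhost", "minikube"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }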
	I0717 00:20:28.213231   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:20:28.213255   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:20:28.213269   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:20:28.213283   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:20:28.213295   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:20:28.213309   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:20:28.213318   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:20:28.213330   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:20:28.213376   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 00:20:28.213407   30817 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 00:20:28.213416   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:20:28.213437   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:20:28.213458   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:20:28.213478   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 00:20:28.213515   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:20:28.213544   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:20:28.213555   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem -> /usr/share/ca-certificates/20068.pem
	I0717 00:20:28.213563   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /usr/share/ca-certificates/200682.pem
	I0717 00:20:28.214069   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:20:28.240174   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:20:28.263734   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:20:28.287105   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:20:28.309725   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 00:20:28.332322   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:20:28.355111   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:20:28.379763   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:20:28.404567   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:20:28.429728   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 00:20:28.459960   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 00:20:28.482807   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:20:28.499548   30817 ssh_runner.go:195] Run: openssl version
	I0717 00:20:28.505411   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:20:28.516086   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:20:28.520576   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:20:28.520626   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:20:28.526482   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:20:28.537516   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 00:20:28.548573   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 00:20:28.552891   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 00:20:28.552932   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 00:20:28.558694   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 00:20:28.569433   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 00:20:28.580351   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 00:20:28.584755   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 00:20:28.584804   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 00:20:28.590170   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:20:28.601089   30817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:20:28.604969   30817 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:20:28.605021   30817 kubeadm.go:392] StartCluster: {Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:20:28.605110   30817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:20:28.605173   30817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:20:28.639951   30817 cri.go:89] found id: ""
	I0717 00:20:28.640015   30817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:20:28.651560   30817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 00:20:28.663297   30817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 00:20:28.674942   30817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 00:20:28.674964   30817 kubeadm.go:157] found existing configuration files:
	
	I0717 00:20:28.675006   30817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 00:20:28.683959   30817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 00:20:28.684041   30817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 00:20:28.693871   30817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 00:20:28.703291   30817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 00:20:28.703358   30817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 00:20:28.713021   30817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 00:20:28.722076   30817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 00:20:28.722158   30817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 00:20:28.731747   30817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 00:20:28.740446   30817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 00:20:28.740494   30817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 00:20:28.749690   30817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 00:20:29.001381   30817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 00:20:40.208034   30817 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 00:20:40.208141   30817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 00:20:40.208255   30817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 00:20:40.208345   30817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 00:20:40.208468   30817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 00:20:40.208531   30817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 00:20:40.210142   30817 out.go:204]   - Generating certificates and keys ...
	I0717 00:20:40.210233   30817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 00:20:40.210305   30817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 00:20:40.210370   30817 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 00:20:40.210452   30817 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 00:20:40.210530   30817 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 00:20:40.210601   30817 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 00:20:40.210688   30817 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 00:20:40.210845   30817 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-565881 localhost] and IPs [192.168.39.238 127.0.0.1 ::1]
	I0717 00:20:40.210929   30817 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 00:20:40.211071   30817 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-565881 localhost] and IPs [192.168.39.238 127.0.0.1 ::1]
	I0717 00:20:40.211146   30817 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 00:20:40.211240   30817 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 00:20:40.211328   30817 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 00:20:40.211401   30817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 00:20:40.211463   30817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 00:20:40.211516   30817 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 00:20:40.211563   30817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 00:20:40.211622   30817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 00:20:40.211674   30817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 00:20:40.211752   30817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 00:20:40.211810   30817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 00:20:40.213891   30817 out.go:204]   - Booting up control plane ...
	I0717 00:20:40.213973   30817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 00:20:40.214042   30817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 00:20:40.214102   30817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 00:20:40.214198   30817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 00:20:40.214279   30817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 00:20:40.214313   30817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 00:20:40.214465   30817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 00:20:40.214557   30817 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 00:20:40.214618   30817 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.479451ms
	I0717 00:20:40.214702   30817 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 00:20:40.214757   30817 kubeadm.go:310] [api-check] The API server is healthy after 6.085629153s
	I0717 00:20:40.214852   30817 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 00:20:40.214978   30817 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 00:20:40.215030   30817 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 00:20:40.215187   30817 kubeadm.go:310] [mark-control-plane] Marking the node ha-565881 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 00:20:40.215237   30817 kubeadm.go:310] [bootstrap-token] Using token: 5t00n9.la7matfwtmym5d6q
	I0717 00:20:40.216480   30817 out.go:204]   - Configuring RBAC rules ...
	I0717 00:20:40.216623   30817 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 00:20:40.216726   30817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 00:20:40.216882   30817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 00:20:40.217025   30817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 00:20:40.217157   30817 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 00:20:40.217252   30817 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 00:20:40.217351   30817 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 00:20:40.217419   30817 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 00:20:40.217470   30817 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 00:20:40.217477   30817 kubeadm.go:310] 
	I0717 00:20:40.217525   30817 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 00:20:40.217530   30817 kubeadm.go:310] 
	I0717 00:20:40.217595   30817 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 00:20:40.217600   30817 kubeadm.go:310] 
	I0717 00:20:40.217637   30817 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 00:20:40.217718   30817 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 00:20:40.217790   30817 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 00:20:40.217797   30817 kubeadm.go:310] 
	I0717 00:20:40.217841   30817 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 00:20:40.217846   30817 kubeadm.go:310] 
	I0717 00:20:40.217891   30817 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 00:20:40.217899   30817 kubeadm.go:310] 
	I0717 00:20:40.217941   30817 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 00:20:40.218007   30817 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 00:20:40.218089   30817 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 00:20:40.218098   30817 kubeadm.go:310] 
	I0717 00:20:40.218200   30817 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 00:20:40.218276   30817 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 00:20:40.218283   30817 kubeadm.go:310] 
	I0717 00:20:40.218388   30817 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5t00n9.la7matfwtmym5d6q \
	I0717 00:20:40.218488   30817 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 \
	I0717 00:20:40.218513   30817 kubeadm.go:310] 	--control-plane 
	I0717 00:20:40.218518   30817 kubeadm.go:310] 
	I0717 00:20:40.218596   30817 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 00:20:40.218604   30817 kubeadm.go:310] 
	I0717 00:20:40.218678   30817 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5t00n9.la7matfwtmym5d6q \
	I0717 00:20:40.218791   30817 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 
	I0717 00:20:40.218805   30817 cni.go:84] Creating CNI manager for ""
	I0717 00:20:40.218812   30817 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 00:20:40.220276   30817 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 00:20:40.221441   30817 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 00:20:40.226975   30817 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 00:20:40.226989   30817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 00:20:40.248809   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 00:20:40.613999   30817 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 00:20:40.614080   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:40.614080   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565881 minikube.k8s.io/updated_at=2024_07_17T00_20_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-565881 minikube.k8s.io/primary=true
	I0717 00:20:40.833632   30817 ops.go:34] apiserver oom_adj: -16
	I0717 00:20:40.858085   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:41.359069   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:41.858639   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:42.358240   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:42.858267   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:43.358581   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:43.858396   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:44.358158   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:44.858731   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:45.359005   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:45.858396   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:46.358378   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:46.858426   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:47.358949   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:47.858961   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:48.358729   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:48.858281   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:49.358411   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:49.858416   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:50.358790   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:50.858531   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:51.358355   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:51.859038   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:52.358185   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:52.858913   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:20:52.974829   30817 kubeadm.go:1113] duration metric: took 12.360814361s to wait for elevateKubeSystemPrivileges
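
The block of repeated "kubectl get sa default" runs above is a poll loop: minikube re-checks roughly every 500ms until the default ServiceAccount exists, then records the elapsed time for elevateKubeSystemPrivileges. Below is a rough sketch of that loop, assuming a kubectl binary on PATH and the kubeconfig path shown in the log; it is not minikube's actual implementation.

// wait_default_sa.go: hedged sketch of the ServiceAccount poll loop above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const kubeconfig = "/var/lib/minikube/kubeconfig" // path taken from the log
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount is present")
			return
		}
		// Matches the ~500ms cadence visible in the timestamps above.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}
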
	I0717 00:20:52.974870   30817 kubeadm.go:394] duration metric: took 24.369853057s to StartCluster
	I0717 00:20:52.974893   30817 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:52.974971   30817 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:20:52.975840   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:52.976081   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 00:20:52.976094   30817 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:20:52.976119   30817 start.go:241] waiting for startup goroutines ...
	I0717 00:20:52.976132   30817 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 00:20:52.976192   30817 addons.go:69] Setting storage-provisioner=true in profile "ha-565881"
	I0717 00:20:52.976204   30817 addons.go:69] Setting default-storageclass=true in profile "ha-565881"
	I0717 00:20:52.976220   30817 addons.go:234] Setting addon storage-provisioner=true in "ha-565881"
	I0717 00:20:52.976241   30817 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-565881"
	I0717 00:20:52.976251   30817 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:20:52.976299   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:20:52.976675   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:20:52.976681   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:20:52.976699   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:20:52.976709   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:20:52.991476   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0717 00:20:52.991807   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40481
	I0717 00:20:52.991972   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:20:52.992149   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:20:52.992518   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:20:52.992538   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:20:52.992651   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:20:52.992670   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:20:52.992846   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:20:52.992999   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:20:52.993190   30817 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:20:52.993376   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:20:52.993406   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:20:52.995211   30817 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:20:52.995468   30817 kapi.go:59] client config for ha-565881: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.crt", KeyFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key", CAFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 00:20:52.995878   30817 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 00:20:52.996010   30817 addons.go:234] Setting addon default-storageclass=true in "ha-565881"
	I0717 00:20:52.996047   30817 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:20:52.996298   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:20:52.996340   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:20:53.008910   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0717 00:20:53.009338   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:20:53.009855   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:20:53.009880   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:20:53.010232   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:20:53.010472   30817 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:20:53.012004   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:53.012110   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I0717 00:20:53.012463   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:20:53.012961   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:20:53.012979   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:20:53.013300   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:20:53.013829   30817 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:20:53.013854   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:20:53.013873   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:20:53.015189   30817 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:20:53.015207   30817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:20:53.015224   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:53.018311   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:53.018722   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:53.018742   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:53.018886   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:53.019064   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:53.019207   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:53.019431   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:20:53.028309   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44659
	I0717 00:20:53.028842   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:20:53.029343   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:20:53.029364   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:20:53.029650   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:20:53.029819   30817 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:20:53.031187   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:20:53.031361   30817 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:20:53.031371   30817 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:20:53.031387   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:20:53.033813   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:53.034139   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:20:53.034165   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:20:53.034401   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:20:53.034547   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:20:53.034672   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:20:53.034805   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:20:53.129181   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 00:20:53.178168   30817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:20:53.212619   30817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:20:53.633820   30817 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
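
The "host record injected into CoreDNS's ConfigMap" message above is the result of the long sed pipeline a few lines earlier, which splices a hosts block for host.minikube.internal in front of the "forward . /etc/resolv.conf" directive of the Corefile. A simplified sketch of that edit follows; the sample Corefile text and the string-based approach are assumptions for illustration, not the ConfigMap actually fetched from the cluster.

// corefile_hosts.go: hedged sketch of the Corefile edit performed by the sed
// pipeline in the log above.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Place the hosts block immediately before the forward directive,
		// mirroring the "/^        forward .../i" sed address in the log.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
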
	I0717 00:20:53.936393   30817 main.go:141] libmachine: Making call to close driver server
	I0717 00:20:53.936421   30817 main.go:141] libmachine: (ha-565881) Calling .Close
	I0717 00:20:53.936475   30817 main.go:141] libmachine: Making call to close driver server
	I0717 00:20:53.936494   30817 main.go:141] libmachine: (ha-565881) Calling .Close
	I0717 00:20:53.936776   30817 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:20:53.936792   30817 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:20:53.936800   30817 main.go:141] libmachine: Making call to close driver server
	I0717 00:20:53.936869   30817 main.go:141] libmachine: (ha-565881) Calling .Close
	I0717 00:20:53.937420   30817 main.go:141] libmachine: (ha-565881) DBG | Closing plugin on server side
	I0717 00:20:53.937475   30817 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:20:53.937510   30817 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:20:53.937524   30817 main.go:141] libmachine: Making call to close driver server
	I0717 00:20:53.937542   30817 main.go:141] libmachine: (ha-565881) Calling .Close
	I0717 00:20:53.937555   30817 main.go:141] libmachine: (ha-565881) DBG | Closing plugin on server side
	I0717 00:20:53.937571   30817 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:20:53.937601   30817 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:20:53.939327   30817 main.go:141] libmachine: (ha-565881) DBG | Closing plugin on server side
	I0717 00:20:53.939365   30817 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:20:53.939380   30817 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:20:53.939513   30817 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0717 00:20:53.939526   30817 round_trippers.go:469] Request Headers:
	I0717 00:20:53.939536   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:20:53.939546   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:20:53.951855   30817 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0717 00:20:53.952521   30817 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0717 00:20:53.952540   30817 round_trippers.go:469] Request Headers:
	I0717 00:20:53.952550   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:20:53.952601   30817 round_trippers.go:473]     Content-Type: application/json
	I0717 00:20:53.952609   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:20:53.955743   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:20:53.955934   30817 main.go:141] libmachine: Making call to close driver server
	I0717 00:20:53.955954   30817 main.go:141] libmachine: (ha-565881) Calling .Close
	I0717 00:20:53.956244   30817 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:20:53.956272   30817 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:20:53.957791   30817 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 00:20:53.959258   30817 addons.go:510] duration metric: took 983.123512ms for enable addons: enabled=[storage-provisioner default-storageclass]
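
The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above comes from request logging in a round tripper (round_trippers.go:463/469/574): every API call prints its method, URL, request headers, and response status with latency. The sketch below shows the general pattern with a plain http.RoundTripper wrapper; it is not client-go's actual round tripper, and the example URL is arbitrary.

// logging_roundtripper.go: hedged sketch of request/response logging in the
// style of the round_trippers lines above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

type loggingRT struct{ next http.RoundTripper }

func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("%s %s\n", req.Method, req.URL)
	fmt.Println("Request Headers:")
	for k, v := range req.Header {
		fmt.Printf("    %s: %v\n", k, v)
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	fmt.Printf("Response Status: %s in %d milliseconds\n", resp.Status, time.Since(start).Milliseconds())
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingRT{next: http.DefaultTransport}}
	req, _ := http.NewRequest("GET", "https://example.com/", nil)
	req.Header.Set("Accept", "application/json, */*")
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()
}
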
	I0717 00:20:53.959291   30817 start.go:246] waiting for cluster config update ...
	I0717 00:20:53.959306   30817 start.go:255] writing updated cluster config ...
	I0717 00:20:53.961199   30817 out.go:177] 
	I0717 00:20:53.962649   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:20:53.962714   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:20:53.964434   30817 out.go:177] * Starting "ha-565881-m02" control-plane node in "ha-565881" cluster
	I0717 00:20:53.965802   30817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:20:53.965826   30817 cache.go:56] Caching tarball of preloaded images
	I0717 00:20:53.965911   30817 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:20:53.965922   30817 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:20:53.965987   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:20:53.966147   30817 start.go:360] acquireMachinesLock for ha-565881-m02: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:20:53.966185   30817 start.go:364] duration metric: took 20.851µs to acquireMachinesLock for "ha-565881-m02"
	I0717 00:20:53.966201   30817 start.go:93] Provisioning new machine with config: &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:20:53.966271   30817 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0717 00:20:53.967815   30817 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 00:20:53.967898   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:20:53.967928   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:20:53.982260   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0717 00:20:53.982677   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:20:53.983168   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:20:53.983203   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:20:53.983562   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:20:53.983765   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetMachineName
	I0717 00:20:53.983929   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:20:53.984160   30817 start.go:159] libmachine.API.Create for "ha-565881" (driver="kvm2")
	I0717 00:20:53.984194   30817 client.go:168] LocalClient.Create starting
	I0717 00:20:53.984229   30817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 00:20:53.984270   30817 main.go:141] libmachine: Decoding PEM data...
	I0717 00:20:53.984290   30817 main.go:141] libmachine: Parsing certificate...
	I0717 00:20:53.984353   30817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 00:20:53.984378   30817 main.go:141] libmachine: Decoding PEM data...
	I0717 00:20:53.984395   30817 main.go:141] libmachine: Parsing certificate...
	I0717 00:20:53.984419   30817 main.go:141] libmachine: Running pre-create checks...
	I0717 00:20:53.984429   30817 main.go:141] libmachine: (ha-565881-m02) Calling .PreCreateCheck
	I0717 00:20:53.984638   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetConfigRaw
	I0717 00:20:53.985083   30817 main.go:141] libmachine: Creating machine...
	I0717 00:20:53.985101   30817 main.go:141] libmachine: (ha-565881-m02) Calling .Create
	I0717 00:20:53.985244   30817 main.go:141] libmachine: (ha-565881-m02) Creating KVM machine...
	I0717 00:20:53.986591   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found existing default KVM network
	I0717 00:20:53.986772   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found existing private KVM network mk-ha-565881
	I0717 00:20:53.986915   30817 main.go:141] libmachine: (ha-565881-m02) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02 ...
	I0717 00:20:53.986939   30817 main.go:141] libmachine: (ha-565881-m02) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 00:20:53.986993   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:53.986884   31210 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:20:53.987068   30817 main.go:141] libmachine: (ha-565881-m02) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 00:20:54.229268   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:54.229137   31210 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa...
	I0717 00:20:54.481989   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:54.481836   31210 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/ha-565881-m02.rawdisk...
	I0717 00:20:54.482060   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Writing magic tar header
	I0717 00:20:54.482079   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Writing SSH key tar header
	I0717 00:20:54.482095   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:54.481977   31210 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02 ...
	I0717 00:20:54.482166   30817 main.go:141] libmachine: (ha-565881-m02) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02 (perms=drwx------)
	I0717 00:20:54.482185   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02
	I0717 00:20:54.482206   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 00:20:54.482222   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:20:54.482253   30817 main.go:141] libmachine: (ha-565881-m02) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:20:54.482274   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 00:20:54.482284   30817 main.go:141] libmachine: (ha-565881-m02) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 00:20:54.482299   30817 main.go:141] libmachine: (ha-565881-m02) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 00:20:54.482310   30817 main.go:141] libmachine: (ha-565881-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:20:54.482320   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:20:54.482333   30817 main.go:141] libmachine: (ha-565881-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:20:54.482344   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:20:54.482352   30817 main.go:141] libmachine: (ha-565881-m02) Creating domain...
	I0717 00:20:54.482368   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Checking permissions on dir: /home
	I0717 00:20:54.482378   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Skipping /home - not owner
	I0717 00:20:54.483304   30817 main.go:141] libmachine: (ha-565881-m02) define libvirt domain using xml: 
	I0717 00:20:54.483324   30817 main.go:141] libmachine: (ha-565881-m02) <domain type='kvm'>
	I0717 00:20:54.483335   30817 main.go:141] libmachine: (ha-565881-m02)   <name>ha-565881-m02</name>
	I0717 00:20:54.483342   30817 main.go:141] libmachine: (ha-565881-m02)   <memory unit='MiB'>2200</memory>
	I0717 00:20:54.483352   30817 main.go:141] libmachine: (ha-565881-m02)   <vcpu>2</vcpu>
	I0717 00:20:54.483360   30817 main.go:141] libmachine: (ha-565881-m02)   <features>
	I0717 00:20:54.483372   30817 main.go:141] libmachine: (ha-565881-m02)     <acpi/>
	I0717 00:20:54.483380   30817 main.go:141] libmachine: (ha-565881-m02)     <apic/>
	I0717 00:20:54.483390   30817 main.go:141] libmachine: (ha-565881-m02)     <pae/>
	I0717 00:20:54.483400   30817 main.go:141] libmachine: (ha-565881-m02)     
	I0717 00:20:54.483410   30817 main.go:141] libmachine: (ha-565881-m02)   </features>
	I0717 00:20:54.483421   30817 main.go:141] libmachine: (ha-565881-m02)   <cpu mode='host-passthrough'>
	I0717 00:20:54.483464   30817 main.go:141] libmachine: (ha-565881-m02)   
	I0717 00:20:54.483497   30817 main.go:141] libmachine: (ha-565881-m02)   </cpu>
	I0717 00:20:54.483510   30817 main.go:141] libmachine: (ha-565881-m02)   <os>
	I0717 00:20:54.483520   30817 main.go:141] libmachine: (ha-565881-m02)     <type>hvm</type>
	I0717 00:20:54.483530   30817 main.go:141] libmachine: (ha-565881-m02)     <boot dev='cdrom'/>
	I0717 00:20:54.483541   30817 main.go:141] libmachine: (ha-565881-m02)     <boot dev='hd'/>
	I0717 00:20:54.483572   30817 main.go:141] libmachine: (ha-565881-m02)     <bootmenu enable='no'/>
	I0717 00:20:54.483597   30817 main.go:141] libmachine: (ha-565881-m02)   </os>
	I0717 00:20:54.483607   30817 main.go:141] libmachine: (ha-565881-m02)   <devices>
	I0717 00:20:54.483617   30817 main.go:141] libmachine: (ha-565881-m02)     <disk type='file' device='cdrom'>
	I0717 00:20:54.483632   30817 main.go:141] libmachine: (ha-565881-m02)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/boot2docker.iso'/>
	I0717 00:20:54.483643   30817 main.go:141] libmachine: (ha-565881-m02)       <target dev='hdc' bus='scsi'/>
	I0717 00:20:54.483655   30817 main.go:141] libmachine: (ha-565881-m02)       <readonly/>
	I0717 00:20:54.483668   30817 main.go:141] libmachine: (ha-565881-m02)     </disk>
	I0717 00:20:54.483687   30817 main.go:141] libmachine: (ha-565881-m02)     <disk type='file' device='disk'>
	I0717 00:20:54.483705   30817 main.go:141] libmachine: (ha-565881-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:20:54.483734   30817 main.go:141] libmachine: (ha-565881-m02)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/ha-565881-m02.rawdisk'/>
	I0717 00:20:54.483754   30817 main.go:141] libmachine: (ha-565881-m02)       <target dev='hda' bus='virtio'/>
	I0717 00:20:54.483767   30817 main.go:141] libmachine: (ha-565881-m02)     </disk>
	I0717 00:20:54.483778   30817 main.go:141] libmachine: (ha-565881-m02)     <interface type='network'>
	I0717 00:20:54.483790   30817 main.go:141] libmachine: (ha-565881-m02)       <source network='mk-ha-565881'/>
	I0717 00:20:54.483804   30817 main.go:141] libmachine: (ha-565881-m02)       <model type='virtio'/>
	I0717 00:20:54.483816   30817 main.go:141] libmachine: (ha-565881-m02)     </interface>
	I0717 00:20:54.483828   30817 main.go:141] libmachine: (ha-565881-m02)     <interface type='network'>
	I0717 00:20:54.483838   30817 main.go:141] libmachine: (ha-565881-m02)       <source network='default'/>
	I0717 00:20:54.483847   30817 main.go:141] libmachine: (ha-565881-m02)       <model type='virtio'/>
	I0717 00:20:54.483856   30817 main.go:141] libmachine: (ha-565881-m02)     </interface>
	I0717 00:20:54.483864   30817 main.go:141] libmachine: (ha-565881-m02)     <serial type='pty'>
	I0717 00:20:54.483879   30817 main.go:141] libmachine: (ha-565881-m02)       <target port='0'/>
	I0717 00:20:54.483891   30817 main.go:141] libmachine: (ha-565881-m02)     </serial>
	I0717 00:20:54.483902   30817 main.go:141] libmachine: (ha-565881-m02)     <console type='pty'>
	I0717 00:20:54.483913   30817 main.go:141] libmachine: (ha-565881-m02)       <target type='serial' port='0'/>
	I0717 00:20:54.483921   30817 main.go:141] libmachine: (ha-565881-m02)     </console>
	I0717 00:20:54.483930   30817 main.go:141] libmachine: (ha-565881-m02)     <rng model='virtio'>
	I0717 00:20:54.483941   30817 main.go:141] libmachine: (ha-565881-m02)       <backend model='random'>/dev/random</backend>
	I0717 00:20:54.483950   30817 main.go:141] libmachine: (ha-565881-m02)     </rng>
	I0717 00:20:54.483965   30817 main.go:141] libmachine: (ha-565881-m02)     
	I0717 00:20:54.483981   30817 main.go:141] libmachine: (ha-565881-m02)     
	I0717 00:20:54.483994   30817 main.go:141] libmachine: (ha-565881-m02)   </devices>
	I0717 00:20:54.484004   30817 main.go:141] libmachine: (ha-565881-m02) </domain>
	I0717 00:20:54.484038   30817 main.go:141] libmachine: (ha-565881-m02) 
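
The "<domain type='kvm'>" block printed above is the libvirt definition the kvm2 driver feeds to "define libvirt domain using xml". A trimmed sketch of how such a definition could be produced with text/template follows; only a few fields are kept, the disk path is a placeholder, and the real driver's template carries many more devices than shown here.

// domain_xml.go: hedged sketch of rendering a minimal libvirt domain XML.
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	cfg := domainConfig{
		Name:      "ha-565881-m02",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/ha-565881-m02.rawdisk", // placeholder, not the real store path
		Network:   "mk-ha-565881",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	_ = tmpl.Execute(os.Stdout, cfg)
}
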
	I0717 00:20:54.490515   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:7e:66:70 in network default
	I0717 00:20:54.491158   30817 main.go:141] libmachine: (ha-565881-m02) Ensuring networks are active...
	I0717 00:20:54.491184   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:54.491861   30817 main.go:141] libmachine: (ha-565881-m02) Ensuring network default is active
	I0717 00:20:54.492173   30817 main.go:141] libmachine: (ha-565881-m02) Ensuring network mk-ha-565881 is active
	I0717 00:20:54.492634   30817 main.go:141] libmachine: (ha-565881-m02) Getting domain xml...
	I0717 00:20:54.493403   30817 main.go:141] libmachine: (ha-565881-m02) Creating domain...
	I0717 00:20:55.752481   30817 main.go:141] libmachine: (ha-565881-m02) Waiting to get IP...
	I0717 00:20:55.753160   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:55.753591   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:55.753634   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:55.753577   31210 retry.go:31] will retry after 269.169887ms: waiting for machine to come up
	I0717 00:20:56.024001   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:56.024486   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:56.024521   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:56.024457   31210 retry.go:31] will retry after 235.250326ms: waiting for machine to come up
	I0717 00:20:56.261736   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:56.262142   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:56.262167   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:56.262096   31210 retry.go:31] will retry after 429.39531ms: waiting for machine to come up
	I0717 00:20:56.692788   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:56.693291   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:56.693324   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:56.693235   31210 retry.go:31] will retry after 578.982983ms: waiting for machine to come up
	I0717 00:20:57.273851   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:57.274257   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:57.274286   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:57.274229   31210 retry.go:31] will retry after 494.250759ms: waiting for machine to come up
	I0717 00:20:57.769699   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:57.770127   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:57.770161   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:57.770079   31210 retry.go:31] will retry after 683.010458ms: waiting for machine to come up
	I0717 00:20:58.454732   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:58.455161   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:58.455191   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:58.455111   31210 retry.go:31] will retry after 1.089607359s: waiting for machine to come up
	I0717 00:20:59.546879   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:20:59.547370   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:20:59.547416   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:20:59.547346   31210 retry.go:31] will retry after 1.380186146s: waiting for machine to come up
	I0717 00:21:00.929935   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:00.930446   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:21:00.930475   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:21:00.930366   31210 retry.go:31] will retry after 1.248137918s: waiting for machine to come up
	I0717 00:21:02.180983   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:02.181510   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:21:02.181535   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:21:02.181457   31210 retry.go:31] will retry after 2.268121621s: waiting for machine to come up
	I0717 00:21:04.451480   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:04.451977   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:21:04.452008   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:21:04.451910   31210 retry.go:31] will retry after 2.654411879s: waiting for machine to come up
	I0717 00:21:07.107555   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:07.108046   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:21:07.108079   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:21:07.107996   31210 retry.go:31] will retry after 3.432158661s: waiting for machine to come up
	I0717 00:21:10.542527   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:10.542978   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find current IP address of domain ha-565881-m02 in network mk-ha-565881
	I0717 00:21:10.543006   30817 main.go:141] libmachine: (ha-565881-m02) DBG | I0717 00:21:10.542923   31210 retry.go:31] will retry after 3.832769057s: waiting for machine to come up
	I0717 00:21:14.376753   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.377156   30817 main.go:141] libmachine: (ha-565881-m02) Found IP for machine: 192.168.39.14
	I0717 00:21:14.377183   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has current primary IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.377192   30817 main.go:141] libmachine: (ha-565881-m02) Reserving static IP address...
	I0717 00:21:14.377514   30817 main.go:141] libmachine: (ha-565881-m02) DBG | unable to find host DHCP lease matching {name: "ha-565881-m02", mac: "52:54:00:10:b5:c3", ip: "192.168.39.14"} in network mk-ha-565881
	I0717 00:21:14.447323   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Getting to WaitForSSH function...
	I0717 00:21:14.447353   30817 main.go:141] libmachine: (ha-565881-m02) Reserved static IP address: 192.168.39.14
	I0717 00:21:14.447365   30817 main.go:141] libmachine: (ha-565881-m02) Waiting for SSH to be available...
	I0717 00:21:14.449994   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.450435   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:14.450460   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.450620   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Using SSH client type: external
	I0717 00:21:14.450653   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa (-rw-------)
	I0717 00:21:14.450686   30817 main.go:141] libmachine: (ha-565881-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:21:14.450697   30817 main.go:141] libmachine: (ha-565881-m02) DBG | About to run SSH command:
	I0717 00:21:14.450705   30817 main.go:141] libmachine: (ha-565881-m02) DBG | exit 0
	I0717 00:21:14.576658   30817 main.go:141] libmachine: (ha-565881-m02) DBG | SSH cmd err, output: <nil>: 
	I0717 00:21:14.576905   30817 main.go:141] libmachine: (ha-565881-m02) KVM machine creation complete!
	I0717 00:21:14.577174   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetConfigRaw
	I0717 00:21:14.577651   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:14.577864   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:14.577990   30817 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:21:14.578004   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:21:14.579238   30817 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:21:14.579254   30817 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:21:14.579260   30817 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:21:14.579266   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:14.581509   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.581847   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:14.581873   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.582047   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:14.582195   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.582336   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.582472   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:14.582607   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:21:14.582858   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0717 00:21:14.582883   30817 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:21:14.683852   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:21:14.683880   30817 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:21:14.683889   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:14.686847   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.687236   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:14.687266   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.687450   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:14.687642   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.687792   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.687911   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:14.688042   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:21:14.688221   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0717 00:21:14.688234   30817 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:21:14.793548   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:21:14.793644   30817 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:21:14.793659   30817 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:21:14.793673   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetMachineName
	I0717 00:21:14.793986   30817 buildroot.go:166] provisioning hostname "ha-565881-m02"
	I0717 00:21:14.794012   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetMachineName
	I0717 00:21:14.794205   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:14.797055   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.797427   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:14.797454   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.797665   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:14.797849   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.798030   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.798192   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:14.798356   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:21:14.798508   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0717 00:21:14.798521   30817 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565881-m02 && echo "ha-565881-m02" | sudo tee /etc/hostname
	I0717 00:21:14.915845   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881-m02
	
	I0717 00:21:14.915872   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:14.918674   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.919009   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:14.919035   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:14.919218   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:14.919401   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.919611   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:14.919751   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:14.919905   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:21:14.920108   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0717 00:21:14.920135   30817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565881-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565881-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565881-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:21:15.039395   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:21:15.039426   30817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:21:15.039443   30817 buildroot.go:174] setting up certificates
	I0717 00:21:15.039453   30817 provision.go:84] configureAuth start
	I0717 00:21:15.039484   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetMachineName
	I0717 00:21:15.039767   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:21:15.042348   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.042651   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.042677   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.042813   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:15.045027   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.045381   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.045409   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.045512   30817 provision.go:143] copyHostCerts
	I0717 00:21:15.045542   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:21:15.045577   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 00:21:15.045585   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:21:15.045645   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:21:15.045727   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:21:15.045743   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 00:21:15.045750   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:21:15.045774   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:21:15.045832   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:21:15.045848   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 00:21:15.045854   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:21:15.045877   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:21:15.045939   30817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.ha-565881-m02 san=[127.0.0.1 192.168.39.14 ha-565881-m02 localhost minikube]
	I0717 00:21:15.186326   30817 provision.go:177] copyRemoteCerts
	I0717 00:21:15.186385   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:21:15.186408   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:15.188981   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.189408   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.189439   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.189612   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:15.189791   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.189934   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:15.190080   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	I0717 00:21:15.270806   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:21:15.270866   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:21:15.295339   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:21:15.295409   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:21:15.324354   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:21:15.324424   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:21:15.349737   30817 provision.go:87] duration metric: took 310.27257ms to configureAuth
	I0717 00:21:15.349762   30817 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:21:15.349935   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:21:15.350020   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:15.352329   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.352623   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.352648   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.352791   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:15.352976   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.353139   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.353294   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:15.353496   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:21:15.353640   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0717 00:21:15.353654   30817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:21:15.611222   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:21:15.611252   30817 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:21:15.611264   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetURL
	I0717 00:21:15.612630   30817 main.go:141] libmachine: (ha-565881-m02) DBG | Using libvirt version 6000000
	I0717 00:21:15.614528   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.614863   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.614890   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.615037   30817 main.go:141] libmachine: Docker is up and running!
	I0717 00:21:15.615053   30817 main.go:141] libmachine: Reticulating splines...
	I0717 00:21:15.615061   30817 client.go:171] duration metric: took 21.630857353s to LocalClient.Create
	I0717 00:21:15.615086   30817 start.go:167] duration metric: took 21.630927441s to libmachine.API.Create "ha-565881"
	I0717 00:21:15.615096   30817 start.go:293] postStartSetup for "ha-565881-m02" (driver="kvm2")
	I0717 00:21:15.615107   30817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:21:15.615133   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:15.615356   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:21:15.615380   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:15.617451   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.617831   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.617858   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.617983   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:15.618161   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.618333   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:15.618475   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	I0717 00:21:15.698806   30817 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:21:15.702981   30817 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:21:15.703007   30817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 00:21:15.703066   30817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 00:21:15.703153   30817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 00:21:15.703165   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /etc/ssl/certs/200682.pem
	I0717 00:21:15.703274   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:21:15.712902   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:21:15.737145   30817 start.go:296] duration metric: took 122.012784ms for postStartSetup
	I0717 00:21:15.737237   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetConfigRaw
	I0717 00:21:15.737846   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:21:15.740271   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.740683   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.740715   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.740945   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:21:15.741163   30817 start.go:128] duration metric: took 21.774880748s to createHost
	I0717 00:21:15.741192   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:15.743833   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.744253   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.744292   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.744498   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:15.744671   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.744822   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.744971   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:15.745097   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:21:15.745252   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0717 00:21:15.745261   30817 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:21:15.849161   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721175675.807023074
	
	I0717 00:21:15.849195   30817 fix.go:216] guest clock: 1721175675.807023074
	I0717 00:21:15.849205   30817 fix.go:229] Guest: 2024-07-17 00:21:15.807023074 +0000 UTC Remote: 2024-07-17 00:21:15.741179027 +0000 UTC m=+77.033871343 (delta=65.844047ms)
	I0717 00:21:15.849224   30817 fix.go:200] guest clock delta is within tolerance: 65.844047ms
	I0717 00:21:15.849229   30817 start.go:83] releasing machines lock for "ha-565881-m02", held for 21.883035485s
	I0717 00:21:15.849246   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:15.849521   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:21:15.851948   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.852298   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.852326   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.854403   30817 out.go:177] * Found network options:
	I0717 00:21:15.855745   30817 out.go:177]   - NO_PROXY=192.168.39.238
	W0717 00:21:15.857061   30817 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:21:15.857088   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:15.857574   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:15.857768   30817 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:21:15.857874   30817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:21:15.857915   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	W0717 00:21:15.857996   30817 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:21:15.858072   30817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:21:15.858092   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:21:15.860570   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.860897   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.860923   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.860984   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.861048   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:15.861196   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.861337   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:15.861477   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:15.861489   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	I0717 00:21:15.861499   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:15.861652   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:21:15.861786   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:21:15.861950   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:21:15.862093   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	I0717 00:21:16.098712   30817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:21:16.105464   30817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:21:16.105534   30817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:21:16.122753   30817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:21:16.122781   30817 start.go:495] detecting cgroup driver to use...
	I0717 00:21:16.122839   30817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:21:16.138274   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:21:16.152974   30817 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:21:16.153036   30817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:21:16.167520   30817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:21:16.181000   30817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:21:16.302425   30817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:21:16.450852   30817 docker.go:233] disabling docker service ...
	I0717 00:21:16.450912   30817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:21:16.465317   30817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:21:16.478214   30817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:21:16.621899   30817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:21:16.753063   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:21:16.767162   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:21:16.785485   30817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:21:16.785551   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.796724   30817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:21:16.796797   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.807450   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.817799   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.830141   30817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:21:16.841132   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.851542   30817 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.868104   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:21:16.877936   30817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:21:16.886919   30817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:21:16.886972   30817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:21:16.899553   30817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:21:16.908759   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:21:17.021904   30817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:21:17.156470   30817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:21:17.156547   30817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:21:17.161101   30817 start.go:563] Will wait 60s for crictl version
	I0717 00:21:17.161152   30817 ssh_runner.go:195] Run: which crictl
	I0717 00:21:17.165085   30817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:21:17.209004   30817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:21:17.209083   30817 ssh_runner.go:195] Run: crio --version
	I0717 00:21:17.239861   30817 ssh_runner.go:195] Run: crio --version
	I0717 00:21:17.268366   30817 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:21:17.269688   30817 out.go:177]   - env NO_PROXY=192.168.39.238
	I0717 00:21:17.270947   30817 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:21:17.273446   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:17.273808   30817 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:21:08 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:21:17.273837   30817 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:21:17.274003   30817 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:21:17.278302   30817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:21:17.291208   30817 mustload.go:65] Loading cluster: ha-565881
	I0717 00:21:17.291377   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:21:17.291612   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:21:17.291634   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:21:17.307255   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
	I0717 00:21:17.307672   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:21:17.308186   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:21:17.308204   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:21:17.308512   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:21:17.308738   30817 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:21:17.310197   30817 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:21:17.310480   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:21:17.310507   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:21:17.326099   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46873
	I0717 00:21:17.326523   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:21:17.326981   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:21:17.327001   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:21:17.327299   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:21:17.327460   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:21:17.327611   30817 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881 for IP: 192.168.39.14
	I0717 00:21:17.327622   30817 certs.go:194] generating shared ca certs ...
	I0717 00:21:17.327635   30817 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:21:17.327744   30817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 00:21:17.327781   30817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 00:21:17.327789   30817 certs.go:256] generating profile certs ...
	I0717 00:21:17.327848   30817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key
	I0717 00:21:17.327872   30817 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.edd24c54
	I0717 00:21:17.327886   30817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.edd24c54 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.14 192.168.39.254]
	I0717 00:21:17.466680   30817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.edd24c54 ...
	I0717 00:21:17.466707   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.edd24c54: {Name:mkca826e3a25ad9472bf780c9aff1b7a7706746f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:21:17.466893   30817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.edd24c54 ...
	I0717 00:21:17.466909   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.edd24c54: {Name:mke091d01f37b34ad0115442b7381ff6068562db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:21:17.467003   30817 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.edd24c54 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt
	I0717 00:21:17.467242   30817 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.edd24c54 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key
	I0717 00:21:17.467495   30817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key
	I0717 00:21:17.467511   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:21:17.467525   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:21:17.467538   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:21:17.467549   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:21:17.467561   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:21:17.467572   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:21:17.467583   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:21:17.467595   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:21:17.467644   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 00:21:17.467671   30817 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 00:21:17.467680   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:21:17.467702   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:21:17.467722   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:21:17.467742   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 00:21:17.467775   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:21:17.467801   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:21:17.467815   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem -> /usr/share/ca-certificates/20068.pem
	I0717 00:21:17.467826   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /usr/share/ca-certificates/200682.pem
	I0717 00:21:17.467854   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:21:17.470912   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:21:17.471291   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:21:17.471317   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:21:17.471473   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:21:17.471667   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:21:17.471811   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:21:17.471956   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:21:17.548903   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 00:21:17.555065   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 00:21:17.572563   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 00:21:17.576829   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0717 00:21:17.587396   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 00:21:17.592366   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 00:21:17.604289   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 00:21:17.608868   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0717 00:21:17.620021   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 00:21:17.624405   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 00:21:17.634338   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 00:21:17.638306   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 00:21:17.648687   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:21:17.673003   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:21:17.695748   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:21:17.718148   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:21:17.740998   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 00:21:17.764136   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:21:17.787524   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:21:17.811551   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:21:17.837099   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:21:17.861188   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 00:21:17.885630   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 00:21:17.909796   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 00:21:17.926407   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0717 00:21:17.942994   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 00:21:17.959243   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0717 00:21:17.975591   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 00:21:17.991629   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 00:21:18.007702   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 00:21:18.023715   30817 ssh_runner.go:195] Run: openssl version
	I0717 00:21:18.029407   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:21:18.040935   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:21:18.045390   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:21:18.045439   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:21:18.051062   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:21:18.062586   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 00:21:18.073376   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 00:21:18.078357   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 00:21:18.078419   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 00:21:18.084215   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 00:21:18.094970   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 00:21:18.105447   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 00:21:18.109794   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 00:21:18.109838   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 00:21:18.115321   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:21:18.126082   30817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:21:18.130095   30817 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:21:18.130158   30817 kubeadm.go:934] updating node {m02 192.168.39.14 8443 v1.30.2 crio true true} ...
	I0717 00:21:18.130242   30817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565881-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:21:18.130262   30817 kube-vip.go:115] generating kube-vip config ...
	I0717 00:21:18.130291   30817 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:21:18.153747   30817 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:21:18.153826   30817 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 00:21:18.153919   30817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:21:18.166896   30817 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 00:21:18.166961   30817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 00:21:18.176849   30817 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 00:21:18.176877   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:21:18.176955   30817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:21:18.176992   30817 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0717 00:21:18.177032   30817 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0717 00:21:18.181262   30817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 00:21:18.181297   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 00:21:18.735720   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:21:18.735801   30817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:21:18.743788   30817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 00:21:18.743824   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 00:21:19.087166   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:21:19.102558   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:21:19.102657   30817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:21:19.106839   30817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 00:21:19.106876   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
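
The binaries step above fetches kubectl, kubeadm and kubelet from dl.k8s.io and verifies each against its published .sha256 file before copying it into /var/lib/minikube/binaries. A minimal Go sketch of that download-and-verify pattern, assuming the upstream .sha256 file contains only the hex digest:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory and fails on any non-200 status.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	want, err := fetch(base + ".sha256") // published checksum file, hex digest only
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(bin)
	if hex.EncodeToString(sum[:]) != strings.TrimSpace(string(want)) {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified and written")
}
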
	I0717 00:21:19.518021   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 00:21:19.528996   30817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0717 00:21:19.546325   30817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:21:19.563343   30817 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:21:19.581500   30817 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:21:19.585989   30817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:21:19.598455   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:21:19.731813   30817 ssh_runner.go:195] Run: sudo systemctl start kubelet
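
Before starting the kubelet, the run above drops a 10-kubeadm.conf drop-in and kubelet.service unit into systemd and pins control-plane.minikube.internal to the VIP in /etc/hosts (the grep -v / echo one-liner at 00:21:19.585989). A rough Go equivalent of that hosts edit, written against a scratch file rather than the real /etc/hosts, might be:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops any stale mapping for host (mirroring the grep -v in the log)
// and appends "ip<TAB>host" at the end of the file.
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // old mapping for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Seed a scratch copy so the sketch is runnable without touching /etc/hosts.
	if err := os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0o644); err != nil {
		panic(err)
	}
	if err := pinHost("hosts.test", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
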
	I0717 00:21:19.748573   30817 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:21:19.749022   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:21:19.749076   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:21:19.763910   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43855
	I0717 00:21:19.764403   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:21:19.764905   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:21:19.764929   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:21:19.765272   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:21:19.765452   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:21:19.765651   30817 start.go:317] joinCluster: &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:21:19.765738   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 00:21:19.765762   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:21:19.768616   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:21:19.769076   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:21:19.769101   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:21:19.769316   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:21:19.769489   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:21:19.769643   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:21:19.769796   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:21:19.946621   30817 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:21:19.946669   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9nm4lz.saewglj5gs64tmcu --discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565881-m02 --control-plane --apiserver-advertise-address=192.168.39.14 --apiserver-bind-port=8443"
	I0717 00:21:43.029624   30817 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9nm4lz.saewglj5gs64tmcu --discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565881-m02 --control-plane --apiserver-advertise-address=192.168.39.14 --apiserver-bind-port=8443": (23.082929282s)
	I0717 00:21:43.029658   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 00:21:43.582797   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565881-m02 minikube.k8s.io/updated_at=2024_07_17T00_21_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-565881 minikube.k8s.io/primary=false
	I0717 00:21:43.721990   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565881-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 00:21:43.853465   30817 start.go:319] duration metric: took 24.087809331s to joinCluster
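
Joining m02 as a second control plane follows the standard kubeadm flow shown above: print a join command with a fresh token on the existing control plane, run kubeadm join ... --control-plane on the new node, then label the node and drop the control-plane NoSchedule taint so it can also schedule workloads. A small illustrative Go wrapper for the post-join kubectl steps (assumes kubectl on PATH; values mirror the log):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	node := "ha-565881-m02"
	// Label the freshly joined node (label values are illustrative).
	if err := run("kubectl", "label", "--overwrite", "nodes", node,
		"minikube.k8s.io/name=ha-565881", "minikube.k8s.io/primary=false"); err != nil {
		panic(err)
	}
	// Remove the control-plane NoSchedule taint so regular pods can land here.
	if err := run("kubectl", "taint", "nodes", node,
		"node-role.kubernetes.io/control-plane:NoSchedule-"); err != nil {
		panic(err)
	}
}
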
	I0717 00:21:43.853542   30817 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:21:43.853974   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:21:43.855081   30817 out.go:177] * Verifying Kubernetes components...
	I0717 00:21:43.856288   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:21:44.180404   30817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:21:44.253103   30817 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:21:44.253404   30817 kapi.go:59] client config for ha-565881: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.crt", KeyFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key", CAFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 00:21:44.253462   30817 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.238:8443
	I0717 00:21:44.253707   30817 node_ready.go:35] waiting up to 6m0s for node "ha-565881-m02" to be "Ready" ...
	I0717 00:21:44.253824   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:44.253837   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:44.253848   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:44.253856   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:44.263354   30817 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 00:21:44.754358   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:44.754382   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:44.754394   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:44.754399   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:44.757655   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:45.253959   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:45.253985   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:45.253996   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:45.254001   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:45.265896   30817 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0717 00:21:45.754930   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:45.754954   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:45.754963   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:45.754971   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:45.760499   30817 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:21:46.254468   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:46.254488   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:46.254496   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:46.254501   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:46.258680   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:46.259471   30817 node_ready.go:53] node "ha-565881-m02" has status "Ready":"False"
	I0717 00:21:46.754803   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:46.754822   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:46.754831   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:46.754837   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:46.758090   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:47.254031   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:47.254065   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:47.254073   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:47.254078   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:47.256739   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:47.754688   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:47.754710   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:47.754718   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:47.754723   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:47.758191   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:48.254475   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:48.254499   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:48.254507   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:48.254513   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:48.258146   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:48.754396   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:48.754416   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:48.754424   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:48.754428   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:48.758447   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:48.759070   30817 node_ready.go:53] node "ha-565881-m02" has status "Ready":"False"
	I0717 00:21:49.254387   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:49.254411   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:49.254420   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:49.254425   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:49.257800   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:49.754890   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:49.754912   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:49.754925   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:49.754928   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:49.758523   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:50.254296   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:50.254317   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:50.254324   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:50.254330   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:50.257421   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:50.754048   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:50.754070   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:50.754078   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:50.754081   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:50.757489   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:51.254264   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:51.254284   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:51.254292   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:51.254296   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:51.257543   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:51.258151   30817 node_ready.go:53] node "ha-565881-m02" has status "Ready":"False"
	I0717 00:21:51.754614   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:51.754640   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:51.754651   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:51.754656   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:51.758000   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:52.254060   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:52.254081   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:52.254089   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:52.254094   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:52.257462   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:52.754781   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:52.754802   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:52.754811   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:52.754815   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:52.757846   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:53.254358   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:53.254379   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:53.254388   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:53.254391   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:53.258341   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:53.259618   30817 node_ready.go:53] node "ha-565881-m02" has status "Ready":"False"
	I0717 00:21:53.754126   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:53.754145   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:53.754152   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:53.754157   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:53.757564   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:54.254041   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:54.254062   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:54.254070   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:54.254074   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:54.257155   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:54.754336   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:54.754357   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:54.754366   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:54.754369   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:54.758497   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:55.253962   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:55.253990   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:55.254000   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:55.254006   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:55.257391   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:55.754572   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:55.754593   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:55.754602   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:55.754607   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:55.757728   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:55.758177   30817 node_ready.go:53] node "ha-565881-m02" has status "Ready":"False"
	I0717 00:21:56.254691   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:56.254717   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:56.254729   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:56.254736   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:56.257897   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:56.754916   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:56.754940   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:56.754951   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:56.754958   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:56.757978   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:57.254531   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:57.254548   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.254556   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.254561   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.258790   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:57.259571   30817 node_ready.go:49] node "ha-565881-m02" has status "Ready":"True"
	I0717 00:21:57.259589   30817 node_ready.go:38] duration metric: took 13.005865099s for node "ha-565881-m02" to be "Ready" ...
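
The loop above polls GET /api/v1/nodes/ha-565881-m02 roughly twice per second until the node's Ready condition flips to True (about 13s here). A hedged client-go sketch of the same wait, not minikube's node_ready implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-565881-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			return nodeReady(n), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
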
	I0717 00:21:57.259601   30817 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:21:57.259673   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:21:57.259687   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.259696   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.259704   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.264123   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:57.269901   30817 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7wsqq" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.269970   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7wsqq
	I0717 00:21:57.269978   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.269985   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.269989   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.273955   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:57.275242   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:21:57.275256   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.275267   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.275273   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.278671   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:57.280064   30817 pod_ready.go:92] pod "coredns-7db6d8ff4d-7wsqq" in "kube-system" namespace has status "Ready":"True"
	I0717 00:21:57.280078   30817 pod_ready.go:81] duration metric: took 10.155563ms for pod "coredns-7db6d8ff4d-7wsqq" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.280087   30817 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xftzx" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.280142   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xftzx
	I0717 00:21:57.280150   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.280157   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.280163   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.283712   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:57.284434   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:21:57.284451   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.284461   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.284466   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.287825   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:57.288291   30817 pod_ready.go:92] pod "coredns-7db6d8ff4d-xftzx" in "kube-system" namespace has status "Ready":"True"
	I0717 00:21:57.288306   30817 pod_ready.go:81] duration metric: took 8.211559ms for pod "coredns-7db6d8ff4d-xftzx" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.288314   30817 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.288365   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881
	I0717 00:21:57.288375   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.288382   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.288386   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.294625   30817 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:21:57.295141   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:21:57.295155   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.295162   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.295166   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.297661   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:57.298177   30817 pod_ready.go:92] pod "etcd-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:21:57.298192   30817 pod_ready.go:81] duration metric: took 9.872878ms for pod "etcd-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.298202   30817 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:57.298249   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m02
	I0717 00:21:57.298256   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.298263   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.298267   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.300843   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:57.301427   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:57.301444   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.301455   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.301460   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.303773   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:57.798433   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m02
	I0717 00:21:57.798453   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.798461   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.798465   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.800827   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:57.801344   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:57.801358   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:57.801365   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:57.801369   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:57.804962   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:58.298823   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m02
	I0717 00:21:58.298860   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:58.298873   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:58.298879   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:58.302157   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:58.302829   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:58.302849   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:58.302860   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:58.302865   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:58.305603   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:58.798991   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m02
	I0717 00:21:58.799016   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:58.799026   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:58.799031   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:58.803532   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:58.804326   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:58.804350   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:58.804359   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:58.804365   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:58.808312   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:59.299268   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m02
	I0717 00:21:59.299293   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.299307   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.299314   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.302932   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:21:59.303686   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:59.303704   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.303715   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.303722   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.306666   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:59.307525   30817 pod_ready.go:92] pod "etcd-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:21:59.307543   30817 pod_ready.go:81] duration metric: took 2.009335864s for pod "etcd-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:59.307558   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:59.307612   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881
	I0717 00:21:59.307619   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.307626   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.307630   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.310245   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:59.311000   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:21:59.311018   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.311026   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.311030   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.313220   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:59.313903   30817 pod_ready.go:92] pod "kube-apiserver-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:21:59.313923   30817 pod_ready.go:81] duration metric: took 6.357608ms for pod "kube-apiserver-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:59.313934   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:59.455298   30817 request.go:629] Waited for 141.297144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881-m02
	I0717 00:21:59.455352   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881-m02
	I0717 00:21:59.455358   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.455363   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.455367   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.460187   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:21:59.655404   30817 request.go:629] Waited for 194.399661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:59.655465   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:21:59.655486   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.655494   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.655501   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.658387   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:21:59.659016   30817 pod_ready.go:92] pod "kube-apiserver-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:21:59.659035   30817 pod_ready.go:81] duration metric: took 345.0936ms for pod "kube-apiserver-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
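
The "Waited for ... due to client-side throttling" messages in this stretch come from client-go's client-side rate limiter: with QPS and Burst left at zero in the rest.Config dumped earlier, the client falls back to its defaults of roughly 5 requests/s with a burst of 10, so bursts of node/pod GETs queue briefly. A tiny sketch of raising those limits (values illustrative):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Zero values mean "use client-go defaults" (about 5 QPS, burst 10),
	// which is what produces the throttling log lines above.
	cfg.QPS = 50
	cfg.Burst = 100
	fmt.Printf("QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
}
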
	I0717 00:21:59.659046   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:21:59.854757   30817 request.go:629] Waited for 195.632632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881
	I0717 00:21:59.854813   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881
	I0717 00:21:59.854820   30817 round_trippers.go:469] Request Headers:
	I0717 00:21:59.854831   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:21:59.854837   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:21:59.857693   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:22:00.054863   30817 request.go:629] Waited for 196.355581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:22:00.054958   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:22:00.054972   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:00.054983   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:00.054994   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:00.057789   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:22:00.058526   30817 pod_ready.go:92] pod "kube-controller-manager-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:22:00.058550   30817 pod_ready.go:81] duration metric: took 399.493448ms for pod "kube-controller-manager-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:00.058564   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:00.254566   30817 request.go:629] Waited for 195.935874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881-m02
	I0717 00:22:00.254623   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881-m02
	I0717 00:22:00.254628   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:00.254635   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:00.254639   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:00.258042   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:00.454980   30817 request.go:629] Waited for 196.362323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:22:00.455027   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:22:00.455032   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:00.455039   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:00.455044   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:00.458367   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:00.459262   30817 pod_ready.go:92] pod "kube-controller-manager-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:22:00.459280   30817 pod_ready.go:81] duration metric: took 400.707959ms for pod "kube-controller-manager-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:00.459292   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2f9rj" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:00.655371   30817 request.go:629] Waited for 196.019686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2f9rj
	I0717 00:22:00.655427   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2f9rj
	I0717 00:22:00.655433   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:00.655440   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:00.655445   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:00.659186   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:00.855369   30817 request.go:629] Waited for 195.349188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:22:00.855451   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:22:00.855460   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:00.855472   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:00.855480   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:00.858360   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:22:00.858883   30817 pod_ready.go:92] pod "kube-proxy-2f9rj" in "kube-system" namespace has status "Ready":"True"
	I0717 00:22:00.858902   30817 pod_ready.go:81] duration metric: took 399.60321ms for pod "kube-proxy-2f9rj" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:00.858913   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7p2jl" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:01.055007   30817 request.go:629] Waited for 196.028908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7p2jl
	I0717 00:22:01.055087   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7p2jl
	I0717 00:22:01.055092   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:01.055101   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:01.055105   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:01.058643   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:01.254694   30817 request.go:629] Waited for 195.281962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:22:01.254744   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:22:01.254749   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:01.254756   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:01.254761   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:01.257827   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:01.258446   30817 pod_ready.go:92] pod "kube-proxy-7p2jl" in "kube-system" namespace has status "Ready":"True"
	I0717 00:22:01.258463   30817 pod_ready.go:81] duration metric: took 399.542723ms for pod "kube-proxy-7p2jl" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:01.258472   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:01.454567   30817 request.go:629] Waited for 196.033234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881
	I0717 00:22:01.454628   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881
	I0717 00:22:01.454633   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:01.454642   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:01.454648   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:01.458294   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:01.655412   30817 request.go:629] Waited for 196.392771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:22:01.655470   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:22:01.655487   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:01.655499   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:01.655507   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:01.659408   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:01.660202   30817 pod_ready.go:92] pod "kube-scheduler-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:22:01.660221   30817 pod_ready.go:81] duration metric: took 401.743987ms for pod "kube-scheduler-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:01.660231   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:01.855275   30817 request.go:629] Waited for 194.980313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881-m02
	I0717 00:22:01.855333   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881-m02
	I0717 00:22:01.855337   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:01.855344   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:01.855352   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:01.857953   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:22:02.055034   30817 request.go:629] Waited for 196.390531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:22:02.055096   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:22:02.055101   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:02.055109   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:02.055113   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:02.058656   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:02.059145   30817 pod_ready.go:92] pod "kube-scheduler-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:22:02.059161   30817 pod_ready.go:81] duration metric: took 398.92395ms for pod "kube-scheduler-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:22:02.059172   30817 pod_ready.go:38] duration metric: took 4.799554499s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
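
The pod checks above walk each system-critical component (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) on both control-plane nodes and wait for the PodReady condition. An illustrative client-go sketch that lists those pods by the same label selectors and reports readiness:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Selectors mirror the labels listed in the log above.
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-45s ready=%v\n", p.Name, podReady(&p))
		}
	}
}
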
	I0717 00:22:02.059188   30817 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:22:02.059233   30817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:22:02.075634   30817 api_server.go:72] duration metric: took 18.222056013s to wait for apiserver process to appear ...
	I0717 00:22:02.075657   30817 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:22:02.075672   30817 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0717 00:22:02.079824   30817 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0717 00:22:02.079877   30817 round_trippers.go:463] GET https://192.168.39.238:8443/version
	I0717 00:22:02.079884   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:02.079893   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:02.079899   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:02.080776   30817 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 00:22:02.081011   30817 api_server.go:141] control plane version: v1.30.2
	I0717 00:22:02.081029   30817 api_server.go:131] duration metric: took 5.366415ms to wait for apiserver health ...
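
Once the pods are Ready, the run probes /healthz and /version on the local API server endpoint, as shown above. The same probe via client-go's discovery client, as a sketch:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Raw GET against /healthz; a healthy API server answers with "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
	// /version reports the control plane version, e.g. v1.30.2 in this run.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}
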
	I0717 00:22:02.081038   30817 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:22:02.255405   30817 request.go:629] Waited for 174.301249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:22:02.255476   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:22:02.255484   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:02.255496   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:02.255505   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:02.261058   30817 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:22:02.265747   30817 system_pods.go:59] 17 kube-system pods found
	I0717 00:22:02.265771   30817 system_pods.go:61] "coredns-7db6d8ff4d-7wsqq" [4a433e03-decb-405d-82f1-b14a72412c8a] Running
	I0717 00:22:02.265776   30817 system_pods.go:61] "coredns-7db6d8ff4d-xftzx" [01fe6b06-0568-4da7-bd0c-1883bc99995c] Running
	I0717 00:22:02.265779   30817 system_pods.go:61] "etcd-ha-565881" [4971f520-5352-442e-b9a2-0944b0755b7f] Running
	I0717 00:22:02.265782   30817 system_pods.go:61] "etcd-ha-565881-m02" [4566d137-b6d8-4af0-8c19-db42aad855cc] Running
	I0717 00:22:02.265785   30817 system_pods.go:61] "kindnet-5lrdt" [bd3c879a-726b-40ed-ba4f-897bf43cda26] Running
	I0717 00:22:02.265788   30817 system_pods.go:61] "kindnet-k882n" [a1f0c383-2430-4479-90ad-d944476aee6f] Running
	I0717 00:22:02.265791   30817 system_pods.go:61] "kube-apiserver-ha-565881" [ef350ec6-b254-4b11-8130-fb059c05bc73] Running
	I0717 00:22:02.265794   30817 system_pods.go:61] "kube-apiserver-ha-565881-m02" [58bb06fd-18e6-4457-8bd9-82438e5d6e87] Running
	I0717 00:22:02.265798   30817 system_pods.go:61] "kube-controller-manager-ha-565881" [30ebcd5f-fb7b-4877-bc4b-e04de10a184e] Running
	I0717 00:22:02.265802   30817 system_pods.go:61] "kube-controller-manager-ha-565881-m02" [dfc4ee73-fe0f-4ec4-bdb9-3827093d3ea0] Running
	I0717 00:22:02.265804   30817 system_pods.go:61] "kube-proxy-2f9rj" [d5e16caa-15e9-4295-8a9a-0e66912f9f1b] Running
	I0717 00:22:02.265807   30817 system_pods.go:61] "kube-proxy-7p2jl" [74f5aff6-5e99-4cfe-af04-94198e8d9616] Running
	I0717 00:22:02.265810   30817 system_pods.go:61] "kube-scheduler-ha-565881" [876bc7f0-71d6-45b1-a313-d94df8f89f18] Running
	I0717 00:22:02.265813   30817 system_pods.go:61] "kube-scheduler-ha-565881-m02" [9734780b-67c9-4727-badb-f6ba028ba095] Running
	I0717 00:22:02.265816   30817 system_pods.go:61] "kube-vip-ha-565881" [7d058028-c841-4807-936f-3f81c1718a93] Running
	I0717 00:22:02.265819   30817 system_pods.go:61] "kube-vip-ha-565881-m02" [06e40aae-1d32-4577-92f5-32a6ce3e1813] Running
	I0717 00:22:02.265822   30817 system_pods.go:61] "storage-provisioner" [0aa1050a-43e1-4f7a-a2df-80cafb48e673] Running
	I0717 00:22:02.265827   30817 system_pods.go:74] duration metric: took 184.784618ms to wait for pod list to return data ...
	I0717 00:22:02.265836   30817 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:22:02.454630   30817 request.go:629] Waited for 188.73003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:22:02.454708   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:22:02.454714   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:02.454724   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:02.454732   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:02.459193   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:22:02.459520   30817 default_sa.go:45] found service account: "default"
	I0717 00:22:02.459540   30817 default_sa.go:55] duration metric: took 193.698798ms for default service account to be created ...
	I0717 00:22:02.459548   30817 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:22:02.655031   30817 request.go:629] Waited for 195.408916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:22:02.655134   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:22:02.655148   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:02.655159   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:02.655170   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:02.660880   30817 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:22:02.664828   30817 system_pods.go:86] 17 kube-system pods found
	I0717 00:22:02.664850   30817 system_pods.go:89] "coredns-7db6d8ff4d-7wsqq" [4a433e03-decb-405d-82f1-b14a72412c8a] Running
	I0717 00:22:02.664856   30817 system_pods.go:89] "coredns-7db6d8ff4d-xftzx" [01fe6b06-0568-4da7-bd0c-1883bc99995c] Running
	I0717 00:22:02.664869   30817 system_pods.go:89] "etcd-ha-565881" [4971f520-5352-442e-b9a2-0944b0755b7f] Running
	I0717 00:22:02.664873   30817 system_pods.go:89] "etcd-ha-565881-m02" [4566d137-b6d8-4af0-8c19-db42aad855cc] Running
	I0717 00:22:02.664877   30817 system_pods.go:89] "kindnet-5lrdt" [bd3c879a-726b-40ed-ba4f-897bf43cda26] Running
	I0717 00:22:02.664880   30817 system_pods.go:89] "kindnet-k882n" [a1f0c383-2430-4479-90ad-d944476aee6f] Running
	I0717 00:22:02.664884   30817 system_pods.go:89] "kube-apiserver-ha-565881" [ef350ec6-b254-4b11-8130-fb059c05bc73] Running
	I0717 00:22:02.664889   30817 system_pods.go:89] "kube-apiserver-ha-565881-m02" [58bb06fd-18e6-4457-8bd9-82438e5d6e87] Running
	I0717 00:22:02.664893   30817 system_pods.go:89] "kube-controller-manager-ha-565881" [30ebcd5f-fb7b-4877-bc4b-e04de10a184e] Running
	I0717 00:22:02.664897   30817 system_pods.go:89] "kube-controller-manager-ha-565881-m02" [dfc4ee73-fe0f-4ec4-bdb9-3827093d3ea0] Running
	I0717 00:22:02.664900   30817 system_pods.go:89] "kube-proxy-2f9rj" [d5e16caa-15e9-4295-8a9a-0e66912f9f1b] Running
	I0717 00:22:02.664904   30817 system_pods.go:89] "kube-proxy-7p2jl" [74f5aff6-5e99-4cfe-af04-94198e8d9616] Running
	I0717 00:22:02.664908   30817 system_pods.go:89] "kube-scheduler-ha-565881" [876bc7f0-71d6-45b1-a313-d94df8f89f18] Running
	I0717 00:22:02.664911   30817 system_pods.go:89] "kube-scheduler-ha-565881-m02" [9734780b-67c9-4727-badb-f6ba028ba095] Running
	I0717 00:22:02.664915   30817 system_pods.go:89] "kube-vip-ha-565881" [7d058028-c841-4807-936f-3f81c1718a93] Running
	I0717 00:22:02.664918   30817 system_pods.go:89] "kube-vip-ha-565881-m02" [06e40aae-1d32-4577-92f5-32a6ce3e1813] Running
	I0717 00:22:02.664922   30817 system_pods.go:89] "storage-provisioner" [0aa1050a-43e1-4f7a-a2df-80cafb48e673] Running
	I0717 00:22:02.664928   30817 system_pods.go:126] duration metric: took 205.375ms to wait for k8s-apps to be running ...
	I0717 00:22:02.664937   30817 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:22:02.664977   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:22:02.682188   30817 system_svc.go:56] duration metric: took 17.242023ms WaitForService to wait for kubelet
	I0717 00:22:02.682214   30817 kubeadm.go:582] duration metric: took 18.828638273s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:22:02.682234   30817 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:22:02.854614   30817 request.go:629] Waited for 172.294632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes
	I0717 00:22:02.854676   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes
	I0717 00:22:02.854683   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:02.854694   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:02.854707   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:02.857799   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:22:02.858706   30817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:22:02.858733   30817 node_conditions.go:123] node cpu capacity is 2
	I0717 00:22:02.858745   30817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:22:02.858752   30817 node_conditions.go:123] node cpu capacity is 2
	I0717 00:22:02.858761   30817 node_conditions.go:105] duration metric: took 176.521225ms to run NodePressure ...
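
The NodePressure verification above amounts to listing the cluster's nodes and reading the capacity each one reports (the two "cpu capacity is 2" / "ephemeral capacity is 17734596Ki" pairs correspond to ha-565881 and ha-565881-m02). A minimal client-go sketch of the same check follows; the kubeconfig path is hypothetical and this is not minikube's own helper code:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path, for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// The same status fields behind the "node cpu capacity" /
    		// "node storage ephemeral capacity" lines in the log.
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }
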
	I0717 00:22:02.858777   30817 start.go:241] waiting for startup goroutines ...
	I0717 00:22:02.858810   30817 start.go:255] writing updated cluster config ...
	I0717 00:22:02.861163   30817 out.go:177] 
	I0717 00:22:02.862755   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:22:02.862879   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:22:02.864551   30817 out.go:177] * Starting "ha-565881-m03" control-plane node in "ha-565881" cluster
	I0717 00:22:02.865908   30817 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:22:02.865931   30817 cache.go:56] Caching tarball of preloaded images
	I0717 00:22:02.866022   30817 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:22:02.866032   30817 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:22:02.866110   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:22:02.866310   30817 start.go:360] acquireMachinesLock for ha-565881-m03: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:22:02.866349   30817 start.go:364] duration metric: took 20.47µs to acquireMachinesLock for "ha-565881-m03"
	I0717 00:22:02.866362   30817 start.go:93] Provisioning new machine with config: &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:22:02.866447   30817 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0717 00:22:02.867988   30817 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 00:22:02.868058   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:22:02.868087   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:22:02.882826   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0717 00:22:02.883258   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:22:02.883692   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:22:02.883710   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:22:02.884029   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:22:02.884205   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetMachineName
	I0717 00:22:02.884369   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:02.884545   30817 start.go:159] libmachine.API.Create for "ha-565881" (driver="kvm2")
	I0717 00:22:02.884592   30817 client.go:168] LocalClient.Create starting
	I0717 00:22:02.884625   30817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 00:22:02.884659   30817 main.go:141] libmachine: Decoding PEM data...
	I0717 00:22:02.884674   30817 main.go:141] libmachine: Parsing certificate...
	I0717 00:22:02.884720   30817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 00:22:02.884737   30817 main.go:141] libmachine: Decoding PEM data...
	I0717 00:22:02.884746   30817 main.go:141] libmachine: Parsing certificate...
	I0717 00:22:02.884761   30817 main.go:141] libmachine: Running pre-create checks...
	I0717 00:22:02.884769   30817 main.go:141] libmachine: (ha-565881-m03) Calling .PreCreateCheck
	I0717 00:22:02.884917   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetConfigRaw
	I0717 00:22:02.885337   30817 main.go:141] libmachine: Creating machine...
	I0717 00:22:02.885351   30817 main.go:141] libmachine: (ha-565881-m03) Calling .Create
	I0717 00:22:02.885464   30817 main.go:141] libmachine: (ha-565881-m03) Creating KVM machine...
	I0717 00:22:02.886765   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found existing default KVM network
	I0717 00:22:02.886857   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found existing private KVM network mk-ha-565881
	I0717 00:22:02.887001   30817 main.go:141] libmachine: (ha-565881-m03) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03 ...
	I0717 00:22:02.887025   30817 main.go:141] libmachine: (ha-565881-m03) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 00:22:02.887055   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:02.886979   31596 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:22:02.887178   30817 main.go:141] libmachine: (ha-565881-m03) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 00:22:03.100976   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:03.100850   31596 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa...
	I0717 00:22:03.546788   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:03.546650   31596 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/ha-565881-m03.rawdisk...
	I0717 00:22:03.546816   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Writing magic tar header
	I0717 00:22:03.546831   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Writing SSH key tar header
	I0717 00:22:03.546841   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:03.546762   31596 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03 ...
	I0717 00:22:03.546874   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03
	I0717 00:22:03.546954   30817 main.go:141] libmachine: (ha-565881-m03) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03 (perms=drwx------)
	I0717 00:22:03.546972   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 00:22:03.546981   30817 main.go:141] libmachine: (ha-565881-m03) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:22:03.547004   30817 main.go:141] libmachine: (ha-565881-m03) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 00:22:03.547016   30817 main.go:141] libmachine: (ha-565881-m03) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 00:22:03.547025   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:22:03.547036   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 00:22:03.547044   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:22:03.547058   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:22:03.547065   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Checking permissions on dir: /home
	I0717 00:22:03.547073   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Skipping /home - not owner
	I0717 00:22:03.547084   30817 main.go:141] libmachine: (ha-565881-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:22:03.547094   30817 main.go:141] libmachine: (ha-565881-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:22:03.547131   30817 main.go:141] libmachine: (ha-565881-m03) Creating domain...
	I0717 00:22:03.547999   30817 main.go:141] libmachine: (ha-565881-m03) define libvirt domain using xml: 
	I0717 00:22:03.548016   30817 main.go:141] libmachine: (ha-565881-m03) <domain type='kvm'>
	I0717 00:22:03.548026   30817 main.go:141] libmachine: (ha-565881-m03)   <name>ha-565881-m03</name>
	I0717 00:22:03.548038   30817 main.go:141] libmachine: (ha-565881-m03)   <memory unit='MiB'>2200</memory>
	I0717 00:22:03.548045   30817 main.go:141] libmachine: (ha-565881-m03)   <vcpu>2</vcpu>
	I0717 00:22:03.548051   30817 main.go:141] libmachine: (ha-565881-m03)   <features>
	I0717 00:22:03.548060   30817 main.go:141] libmachine: (ha-565881-m03)     <acpi/>
	I0717 00:22:03.548068   30817 main.go:141] libmachine: (ha-565881-m03)     <apic/>
	I0717 00:22:03.548080   30817 main.go:141] libmachine: (ha-565881-m03)     <pae/>
	I0717 00:22:03.548093   30817 main.go:141] libmachine: (ha-565881-m03)     
	I0717 00:22:03.548109   30817 main.go:141] libmachine: (ha-565881-m03)   </features>
	I0717 00:22:03.548125   30817 main.go:141] libmachine: (ha-565881-m03)   <cpu mode='host-passthrough'>
	I0717 00:22:03.548145   30817 main.go:141] libmachine: (ha-565881-m03)   
	I0717 00:22:03.548156   30817 main.go:141] libmachine: (ha-565881-m03)   </cpu>
	I0717 00:22:03.548162   30817 main.go:141] libmachine: (ha-565881-m03)   <os>
	I0717 00:22:03.548167   30817 main.go:141] libmachine: (ha-565881-m03)     <type>hvm</type>
	I0717 00:22:03.548174   30817 main.go:141] libmachine: (ha-565881-m03)     <boot dev='cdrom'/>
	I0717 00:22:03.548181   30817 main.go:141] libmachine: (ha-565881-m03)     <boot dev='hd'/>
	I0717 00:22:03.548187   30817 main.go:141] libmachine: (ha-565881-m03)     <bootmenu enable='no'/>
	I0717 00:22:03.548192   30817 main.go:141] libmachine: (ha-565881-m03)   </os>
	I0717 00:22:03.548197   30817 main.go:141] libmachine: (ha-565881-m03)   <devices>
	I0717 00:22:03.548214   30817 main.go:141] libmachine: (ha-565881-m03)     <disk type='file' device='cdrom'>
	I0717 00:22:03.548224   30817 main.go:141] libmachine: (ha-565881-m03)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/boot2docker.iso'/>
	I0717 00:22:03.548231   30817 main.go:141] libmachine: (ha-565881-m03)       <target dev='hdc' bus='scsi'/>
	I0717 00:22:03.548236   30817 main.go:141] libmachine: (ha-565881-m03)       <readonly/>
	I0717 00:22:03.548243   30817 main.go:141] libmachine: (ha-565881-m03)     </disk>
	I0717 00:22:03.548250   30817 main.go:141] libmachine: (ha-565881-m03)     <disk type='file' device='disk'>
	I0717 00:22:03.548262   30817 main.go:141] libmachine: (ha-565881-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:22:03.548285   30817 main.go:141] libmachine: (ha-565881-m03)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/ha-565881-m03.rawdisk'/>
	I0717 00:22:03.548305   30817 main.go:141] libmachine: (ha-565881-m03)       <target dev='hda' bus='virtio'/>
	I0717 00:22:03.548336   30817 main.go:141] libmachine: (ha-565881-m03)     </disk>
	I0717 00:22:03.548359   30817 main.go:141] libmachine: (ha-565881-m03)     <interface type='network'>
	I0717 00:22:03.548371   30817 main.go:141] libmachine: (ha-565881-m03)       <source network='mk-ha-565881'/>
	I0717 00:22:03.548383   30817 main.go:141] libmachine: (ha-565881-m03)       <model type='virtio'/>
	I0717 00:22:03.548392   30817 main.go:141] libmachine: (ha-565881-m03)     </interface>
	I0717 00:22:03.548402   30817 main.go:141] libmachine: (ha-565881-m03)     <interface type='network'>
	I0717 00:22:03.548413   30817 main.go:141] libmachine: (ha-565881-m03)       <source network='default'/>
	I0717 00:22:03.548423   30817 main.go:141] libmachine: (ha-565881-m03)       <model type='virtio'/>
	I0717 00:22:03.548435   30817 main.go:141] libmachine: (ha-565881-m03)     </interface>
	I0717 00:22:03.548443   30817 main.go:141] libmachine: (ha-565881-m03)     <serial type='pty'>
	I0717 00:22:03.548453   30817 main.go:141] libmachine: (ha-565881-m03)       <target port='0'/>
	I0717 00:22:03.548463   30817 main.go:141] libmachine: (ha-565881-m03)     </serial>
	I0717 00:22:03.548472   30817 main.go:141] libmachine: (ha-565881-m03)     <console type='pty'>
	I0717 00:22:03.548482   30817 main.go:141] libmachine: (ha-565881-m03)       <target type='serial' port='0'/>
	I0717 00:22:03.548491   30817 main.go:141] libmachine: (ha-565881-m03)     </console>
	I0717 00:22:03.548501   30817 main.go:141] libmachine: (ha-565881-m03)     <rng model='virtio'>
	I0717 00:22:03.548512   30817 main.go:141] libmachine: (ha-565881-m03)       <backend model='random'>/dev/random</backend>
	I0717 00:22:03.548522   30817 main.go:141] libmachine: (ha-565881-m03)     </rng>
	I0717 00:22:03.548530   30817 main.go:141] libmachine: (ha-565881-m03)     
	I0717 00:22:03.548542   30817 main.go:141] libmachine: (ha-565881-m03)     
	I0717 00:22:03.548554   30817 main.go:141] libmachine: (ha-565881-m03)   </devices>
	I0717 00:22:03.548587   30817 main.go:141] libmachine: (ha-565881-m03) </domain>
	I0717 00:22:03.548596   30817 main.go:141] libmachine: (ha-565881-m03) 
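
The XML printed above is the libvirt domain definition the kvm2 driver submits for the new node. A minimal sketch of defining and booting such a domain with the libvirt Go bindings (libvirt.org/go/libvirt); the XML file path is hypothetical, since the driver assembles the document in memory:

    package main

    import (
    	"fmt"
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// Hypothetical file holding a domain definition like the one in the log.
    	xml, err := os.ReadFile("ha-565881-m03.xml")
    	if err != nil {
    		panic(err)
    	}

    	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // boots the VM; the log then waits for an IP
    		panic(err)
    	}
    	name, _ := dom.GetName()
    	fmt.Println("domain started:", name)
    }
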
	I0717 00:22:03.554999   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:fb:f0:3d in network default
	I0717 00:22:03.555533   30817 main.go:141] libmachine: (ha-565881-m03) Ensuring networks are active...
	I0717 00:22:03.555553   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:03.556171   30817 main.go:141] libmachine: (ha-565881-m03) Ensuring network default is active
	I0717 00:22:03.556542   30817 main.go:141] libmachine: (ha-565881-m03) Ensuring network mk-ha-565881 is active
	I0717 00:22:03.556987   30817 main.go:141] libmachine: (ha-565881-m03) Getting domain xml...
	I0717 00:22:03.557752   30817 main.go:141] libmachine: (ha-565881-m03) Creating domain...
	I0717 00:22:04.806677   30817 main.go:141] libmachine: (ha-565881-m03) Waiting to get IP...
	I0717 00:22:04.807572   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:04.808016   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:04.808046   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:04.807995   31596 retry.go:31] will retry after 211.718343ms: waiting for machine to come up
	I0717 00:22:05.021438   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:05.022057   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:05.022086   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:05.022008   31596 retry.go:31] will retry after 265.863837ms: waiting for machine to come up
	I0717 00:22:05.289551   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:05.289951   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:05.289981   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:05.289890   31596 retry.go:31] will retry after 349.875152ms: waiting for machine to come up
	I0717 00:22:05.641527   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:05.642003   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:05.642032   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:05.641961   31596 retry.go:31] will retry after 607.972538ms: waiting for machine to come up
	I0717 00:22:06.251736   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:06.252197   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:06.252232   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:06.252149   31596 retry.go:31] will retry after 697.741072ms: waiting for machine to come up
	I0717 00:22:06.951013   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:06.951421   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:06.951451   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:06.951372   31596 retry.go:31] will retry after 904.364294ms: waiting for machine to come up
	I0717 00:22:07.857282   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:07.857694   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:07.857724   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:07.857653   31596 retry.go:31] will retry after 924.755324ms: waiting for machine to come up
	I0717 00:22:08.783393   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:08.783771   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:08.783792   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:08.783740   31596 retry.go:31] will retry after 1.197183629s: waiting for machine to come up
	I0717 00:22:09.983164   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:09.983593   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:09.983621   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:09.983543   31596 retry.go:31] will retry after 1.710729828s: waiting for machine to come up
	I0717 00:22:11.696577   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:11.696989   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:11.697011   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:11.696955   31596 retry.go:31] will retry after 1.417585787s: waiting for machine to come up
	I0717 00:22:13.115659   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:13.116095   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:13.116125   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:13.116045   31596 retry.go:31] will retry after 2.443611308s: waiting for machine to come up
	I0717 00:22:15.562557   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:15.562962   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:15.562989   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:15.562916   31596 retry.go:31] will retry after 2.303917621s: waiting for machine to come up
	I0717 00:22:17.868306   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:17.868726   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:17.868752   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:17.868683   31596 retry.go:31] will retry after 2.93737042s: waiting for machine to come up
	I0717 00:22:20.809508   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:20.809833   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find current IP address of domain ha-565881-m03 in network mk-ha-565881
	I0717 00:22:20.809861   30817 main.go:141] libmachine: (ha-565881-m03) DBG | I0717 00:22:20.809788   31596 retry.go:31] will retry after 5.18911505s: waiting for machine to come up
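
The repeated "will retry after …: waiting for machine to come up" lines are a polling loop that re-checks the libvirt DHCP leases for the new MAC address until an IP appears. A generic sketch of that retry-with-growing-delay pattern; the helper below is illustrative only and is not minikube's retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryBackoff keeps calling fn until it succeeds or attempts run out,
    // sleeping a growing, jittered delay between tries (much like the delays
    // printed in the log above).
    func retryBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %s: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	tries := 0
    	ip := ""
    	err := retryBackoff(10, 200*time.Millisecond, func() error {
    		tries++
    		if tries < 4 { // stand-in for "no DHCP lease for this MAC yet"
    			return errors.New("unable to find current IP address")
    		}
    		ip = "192.168.39.97"
    		return nil
    	})
    	fmt.Println(ip, err)
    }
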
	I0717 00:22:26.001820   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:26.002387   30817 main.go:141] libmachine: (ha-565881-m03) Found IP for machine: 192.168.39.97
	I0717 00:22:26.002412   30817 main.go:141] libmachine: (ha-565881-m03) Reserving static IP address...
	I0717 00:22:26.002425   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has current primary IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:26.002888   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find host DHCP lease matching {name: "ha-565881-m03", mac: "52:54:00:43:60:7e", ip: "192.168.39.97"} in network mk-ha-565881
	I0717 00:22:26.074647   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Getting to WaitForSSH function...
	I0717 00:22:26.074675   30817 main.go:141] libmachine: (ha-565881-m03) Reserved static IP address: 192.168.39.97
	I0717 00:22:26.074686   30817 main.go:141] libmachine: (ha-565881-m03) Waiting for SSH to be available...
	I0717 00:22:26.077499   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:26.077813   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881
	I0717 00:22:26.077840   30817 main.go:141] libmachine: (ha-565881-m03) DBG | unable to find defined IP address of network mk-ha-565881 interface with MAC address 52:54:00:43:60:7e
	I0717 00:22:26.078046   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Using SSH client type: external
	I0717 00:22:26.078075   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa (-rw-------)
	I0717 00:22:26.078122   30817 main.go:141] libmachine: (ha-565881-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:22:26.078152   30817 main.go:141] libmachine: (ha-565881-m03) DBG | About to run SSH command:
	I0717 00:22:26.078169   30817 main.go:141] libmachine: (ha-565881-m03) DBG | exit 0
	I0717 00:22:26.081736   30817 main.go:141] libmachine: (ha-565881-m03) DBG | SSH cmd err, output: exit status 255: 
	I0717 00:22:26.081754   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 00:22:26.081785   30817 main.go:141] libmachine: (ha-565881-m03) DBG | command : exit 0
	I0717 00:22:26.081810   30817 main.go:141] libmachine: (ha-565881-m03) DBG | err     : exit status 255
	I0717 00:22:26.081835   30817 main.go:141] libmachine: (ha-565881-m03) DBG | output  : 
	I0717 00:22:29.083044   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Getting to WaitForSSH function...
	I0717 00:22:29.085550   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.085950   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.085977   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.086093   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Using SSH client type: external
	I0717 00:22:29.086117   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa (-rw-------)
	I0717 00:22:29.086146   30817 main.go:141] libmachine: (ha-565881-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:22:29.086171   30817 main.go:141] libmachine: (ha-565881-m03) DBG | About to run SSH command:
	I0717 00:22:29.086185   30817 main.go:141] libmachine: (ha-565881-m03) DBG | exit 0
	I0717 00:22:29.216890   30817 main.go:141] libmachine: (ha-565881-m03) DBG | SSH cmd err, output: <nil>: 
	I0717 00:22:29.217130   30817 main.go:141] libmachine: (ha-565881-m03) KVM machine creation complete!
	I0717 00:22:29.217425   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetConfigRaw
	I0717 00:22:29.217916   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:29.218084   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:29.218244   30817 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:22:29.218261   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetState
	I0717 00:22:29.219265   30817 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:22:29.219281   30817 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:22:29.219286   30817 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:22:29.219292   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:29.221770   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.222160   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.222188   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.222336   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:29.222491   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.222633   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.222801   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:29.222961   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:22:29.223225   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 00:22:29.223244   30817 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:22:29.339981   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:22:29.340035   30817 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:22:29.340049   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:29.342737   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.343077   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.343101   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.343281   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:29.343467   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.343643   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.343743   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:29.343882   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:22:29.344075   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 00:22:29.344088   30817 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:22:29.457763   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:22:29.457867   30817 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:22:29.457886   30817 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:22:29.457904   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetMachineName
	I0717 00:22:29.458162   30817 buildroot.go:166] provisioning hostname "ha-565881-m03"
	I0717 00:22:29.458186   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetMachineName
	I0717 00:22:29.458373   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:29.461035   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.461444   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.461474   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.461629   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:29.461805   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.461932   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.462072   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:29.462234   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:22:29.462405   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 00:22:29.462418   30817 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565881-m03 && echo "ha-565881-m03" | sudo tee /etc/hostname
	I0717 00:22:29.591957   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881-m03
	
	I0717 00:22:29.591990   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:29.594904   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.595285   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.595313   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.595472   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:29.595651   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.595825   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:29.595958   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:29.596162   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:22:29.596333   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 00:22:29.596351   30817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565881-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565881-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565881-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:22:29.722001   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
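
Each provisioning step above (hostname, /etc/hosts, and the certificates that follow) is a shell snippet pushed over SSH with the key generated earlier. A bare-bones sketch of running one such command with golang.org/x/crypto/ssh, reusing the host, user, and key path that appear in the log; error handling is trimmed to panics:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and address taken from the log; adjust for a real environment.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.97:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	out, err := session.CombinedOutput(`sudo hostname ha-565881-m03 && echo "ha-565881-m03" | sudo tee /etc/hostname`)
    	fmt.Printf("%s(err=%v)\n", out, err)
    }
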
	I0717 00:22:29.722027   30817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:22:29.722046   30817 buildroot.go:174] setting up certificates
	I0717 00:22:29.722055   30817 provision.go:84] configureAuth start
	I0717 00:22:29.722062   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetMachineName
	I0717 00:22:29.722320   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:22:29.724993   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.725341   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.725369   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.725486   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:29.727638   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.727941   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:29.727963   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:29.728092   30817 provision.go:143] copyHostCerts
	I0717 00:22:29.728133   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:22:29.728161   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 00:22:29.728170   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:22:29.728231   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:22:29.728311   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:22:29.728329   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 00:22:29.728335   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:22:29.728359   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:22:29.728423   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:22:29.728438   30817 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 00:22:29.728444   30817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:22:29.728464   30817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:22:29.728533   30817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.ha-565881-m03 san=[127.0.0.1 192.168.39.97 ha-565881-m03 localhost minikube]
	I0717 00:22:30.102761   30817 provision.go:177] copyRemoteCerts
	I0717 00:22:30.102834   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:22:30.102888   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:30.105368   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.105688   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.105712   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.105899   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:30.106098   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.106261   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:30.106394   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:22:30.190756   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:22:30.190838   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:22:30.218145   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:22:30.218218   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:22:30.245610   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:22:30.245686   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 00:22:30.272316   30817 provision.go:87] duration metric: took 550.249946ms to configureAuth
	I0717 00:22:30.272341   30817 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:22:30.272532   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:22:30.272633   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:30.276262   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.276690   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.276715   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.276901   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:30.277104   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.277260   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.277375   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:30.277517   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:22:30.277667   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 00:22:30.277683   30817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:22:30.557275   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:22:30.557300   30817 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:22:30.557311   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetURL
	I0717 00:22:30.558689   30817 main.go:141] libmachine: (ha-565881-m03) DBG | Using libvirt version 6000000
	I0717 00:22:30.560704   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.561108   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.561136   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.561265   30817 main.go:141] libmachine: Docker is up and running!
	I0717 00:22:30.561279   30817 main.go:141] libmachine: Reticulating splines...
	I0717 00:22:30.561285   30817 client.go:171] duration metric: took 27.676684071s to LocalClient.Create
	I0717 00:22:30.561307   30817 start.go:167] duration metric: took 27.676764164s to libmachine.API.Create "ha-565881"
	I0717 00:22:30.561316   30817 start.go:293] postStartSetup for "ha-565881-m03" (driver="kvm2")
	I0717 00:22:30.561324   30817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:22:30.561341   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:30.561582   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:22:30.561610   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:30.563489   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.563836   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.563863   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.563967   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:30.564128   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.564289   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:30.564396   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:22:30.656469   30817 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:22:30.660891   30817 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:22:30.660912   30817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 00:22:30.660982   30817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 00:22:30.661071   30817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 00:22:30.661082   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /etc/ssl/certs/200682.pem
	I0717 00:22:30.661189   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:22:30.671200   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:22:30.695582   30817 start.go:296] duration metric: took 134.255665ms for postStartSetup
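
postStartSetup mirrors everything under the local .minikube/files tree onto the guest at the corresponding absolute path (here files/etc/ssl/certs/200682.pem becomes /etc/ssl/certs/200682.pem). A small sketch of that path mapping, assuming the local root shown in the log; the copy itself would then go over the same SSH channel as above:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// listAssets walks the local "files" root and prints the guest path each
// file would be copied to, mirroring the filesync scan in the log.
func listAssets(root string) error {
	return filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		guest := "/" + strings.TrimPrefix(filepath.ToSlash(p), filepath.ToSlash(root)+"/")
		fmt.Printf("%s -> %s\n", p, guest)
		return nil
	})
}

func main() {
	// e.g. .minikube/files/etc/ssl/certs/200682.pem -> /etc/ssl/certs/200682.pem
	_ = listAssets("/home/jenkins/minikube-integration/19265-12897/.minikube/files")
}
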
	I0717 00:22:30.695629   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetConfigRaw
	I0717 00:22:30.696197   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:22:30.698630   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.698951   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.698983   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.699238   30817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:22:30.699526   30817 start.go:128] duration metric: took 27.833068299s to createHost
	I0717 00:22:30.699550   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:30.701769   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.702109   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.702135   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.702261   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:30.702431   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.702598   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.702713   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:30.702875   30817 main.go:141] libmachine: Using SSH client type: native
	I0717 00:22:30.703038   30817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 00:22:30.703052   30817 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:22:30.821178   30817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721175750.800249413
	
	I0717 00:22:30.821207   30817 fix.go:216] guest clock: 1721175750.800249413
	I0717 00:22:30.821214   30817 fix.go:229] Guest: 2024-07-17 00:22:30.800249413 +0000 UTC Remote: 2024-07-17 00:22:30.699539055 +0000 UTC m=+151.992231366 (delta=100.710358ms)
	I0717 00:22:30.821235   30817 fix.go:200] guest clock delta is within tolerance: 100.710358ms
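
The fix.go lines read the guest clock over SSH and only treat it as a problem when it drifts from the host clock by more than a tolerance (here the delta is about 100ms). A sketch of that comparison, assuming the guest returns the raw seconds.nanoseconds string and that the fractional part has nine digits:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// withinTolerance parses the guest's "seconds.nanoseconds" timestamp and
// reports whether it is within tol of the local clock.
func withinTolerance(guestOut string, tol time.Duration) (time.Duration, bool, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, false, err
	}
	var nsec int64
	if len(parts) == 2 {
		// assumes exactly nine fractional digits, as in the log output
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	guest := time.Unix(sec, nsec)
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tol), nil
}

func main() {
	delta, ok, err := withinTolerance("1721175750.800249413", time.Second)
	fmt.Println(delta, ok, err)
}
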
	I0717 00:22:30.821242   30817 start.go:83] releasing machines lock for "ha-565881-m03", held for 27.95488658s
	I0717 00:22:30.821268   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:30.821510   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:22:30.824447   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.824878   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.824919   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.826850   30817 out.go:177] * Found network options:
	I0717 00:22:30.828168   30817 out.go:177]   - NO_PROXY=192.168.39.238,192.168.39.14
	W0717 00:22:30.829541   30817 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 00:22:30.829573   30817 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:22:30.829591   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:30.830154   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:30.830371   30817 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:22:30.830475   30817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:22:30.830511   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	W0717 00:22:30.830589   30817 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 00:22:30.830622   30817 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:22:30.830689   30817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:22:30.830713   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:22:30.833259   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.833280   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.833624   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.833671   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:30.833716   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.833740   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:30.833881   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:30.834002   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:22:30.834085   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.834148   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:22:30.834223   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:30.834286   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:22:30.834356   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:22:30.834405   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:22:31.070544   30817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:22:31.077562   30817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:22:31.077642   30817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:22:31.096361   30817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:22:31.096385   30817 start.go:495] detecting cgroup driver to use...
	I0717 00:22:31.096449   30817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:22:31.113441   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:22:31.128116   30817 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:22:31.128168   30817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:22:31.142089   30817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:22:31.157273   30817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:22:31.274897   30817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:22:31.417373   30817 docker.go:233] disabling docker service ...
	I0717 00:22:31.417435   30817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:22:31.432043   30817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:22:31.444871   30817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:22:31.586219   30817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:22:31.711201   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:22:31.725226   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:22:31.744010   30817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:22:31.744064   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:22:31.754493   30817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:22:31.754549   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:22:31.764857   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:22:31.774815   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:22:31.786360   30817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:22:31.797592   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:22:31.809735   30817 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:22:31.827409   30817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
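
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroup_manager to cgroupfs, reset conmon_cgroup, and open unprivileged ports via default_sysctls. A rough local equivalent of the first two substitutions using regexp; the path is illustrative and error handling is minimal:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // illustrative path
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
}
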
	I0717 00:22:31.838541   30817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:22:31.848933   30817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:22:31.848988   30817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:22:31.863023   30817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:22:31.873177   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:22:31.996760   30817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:22:32.139217   30817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:22:32.139301   30817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:22:32.144588   30817 start.go:563] Will wait 60s for crictl version
	I0717 00:22:32.144652   30817 ssh_runner.go:195] Run: which crictl
	I0717 00:22:32.148444   30817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:22:32.194079   30817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:22:32.194170   30817 ssh_runner.go:195] Run: crio --version
	I0717 00:22:32.227119   30817 ssh_runner.go:195] Run: crio --version
	I0717 00:22:32.257889   30817 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:22:32.259114   30817 out.go:177]   - env NO_PROXY=192.168.39.238
	I0717 00:22:32.260362   30817 out.go:177]   - env NO_PROXY=192.168.39.238,192.168.39.14
	I0717 00:22:32.261676   30817 main.go:141] libmachine: (ha-565881-m03) Calling .GetIP
	I0717 00:22:32.263900   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:32.264277   30817 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:22:32.264300   30817 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:22:32.264522   30817 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:22:32.268958   30817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
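
The grep/rewrite pair above keeps the host.minikube.internal entry idempotent: any stale line for that name is dropped before the current gateway IP is appended. The same idea expressed in Go, assuming the hosts path and entry from the log:

package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<name>" and appends "ip\tname".
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
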
	I0717 00:22:32.282000   30817 mustload.go:65] Loading cluster: ha-565881
	I0717 00:22:32.282214   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:22:32.282490   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:22:32.282531   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:22:32.296902   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0717 00:22:32.297298   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:22:32.297737   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:22:32.297763   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:22:32.298097   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:22:32.298290   30817 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:22:32.300113   30817 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:22:32.300385   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:22:32.300421   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:22:32.314892   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37993
	I0717 00:22:32.315291   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:22:32.315713   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:22:32.315733   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:22:32.315985   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:22:32.316185   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:22:32.316331   30817 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881 for IP: 192.168.39.97
	I0717 00:22:32.316344   30817 certs.go:194] generating shared ca certs ...
	I0717 00:22:32.316360   30817 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:22:32.316500   30817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 00:22:32.316551   30817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 00:22:32.316572   30817 certs.go:256] generating profile certs ...
	I0717 00:22:32.316659   30817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key
	I0717 00:22:32.316692   30817 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.65a8b113
	I0717 00:22:32.316711   30817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.65a8b113 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.14 192.168.39.97 192.168.39.254]
	I0717 00:22:32.429859   30817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.65a8b113 ...
	I0717 00:22:32.429892   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.65a8b113: {Name:mkb173c5cf13ec370191e3cf7b873ed5811cd7be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:22:32.430072   30817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.65a8b113 ...
	I0717 00:22:32.430084   30817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.65a8b113: {Name:mk641c824f290b6f90aafcb698fd5c766c8aba2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:22:32.430165   30817 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.65a8b113 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt
	I0717 00:22:32.430307   30817 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.65a8b113 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key
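
The profile certificate generated here is the API server serving cert for the new control-plane node; its SAN list covers every address a client might dial, including the kube-vip VIP 192.168.39.254. A compressed sketch of issuing such a cert with the standard library. The CA below is generated on the spot rather than loaded from .minikube/ca.key, so it is illustrative only and error handling is elided:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA (the real flow loads ca.crt/ca.key).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ca := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now(), NotAfter: time.Now().AddDate(10, 0, 0),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API-server serving cert with the IP SANs listed in the log, VIP included.
	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2), Subject: pkix.Name{CommonName: "minikube"},
		NotBefore: time.Now(), NotAfter: time.Now().AddDate(3, 0, 0),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.238"), net.ParseIP("192.168.39.14"),
			net.ParseIP("192.168.39.97"), net.ParseIP("192.168.39.254"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
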
	I0717 00:22:32.430442   30817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key
	I0717 00:22:32.430460   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:22:32.430474   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:22:32.430489   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:22:32.430502   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:22:32.430513   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:22:32.430530   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:22:32.430544   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:22:32.430555   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:22:32.430604   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 00:22:32.430634   30817 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 00:22:32.430645   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:22:32.430670   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:22:32.430696   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:22:32.430723   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 00:22:32.430765   30817 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:22:32.430794   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /usr/share/ca-certificates/200682.pem
	I0717 00:22:32.430809   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:22:32.430823   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem -> /usr/share/ca-certificates/20068.pem
	I0717 00:22:32.430864   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:22:32.433531   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:22:32.433903   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:22:32.433930   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:22:32.434116   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:22:32.434313   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:22:32.434460   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:22:32.434577   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:22:32.512988   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 00:22:32.518081   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 00:22:32.529751   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 00:22:32.534297   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0717 00:22:32.546070   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 00:22:32.550827   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 00:22:32.561467   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 00:22:32.565939   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0717 00:22:32.576500   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 00:22:32.581011   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 00:22:32.592147   30817 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 00:22:32.596865   30817 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 00:22:32.608689   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:22:32.637050   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:22:32.663322   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:22:32.688733   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:22:32.713967   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0717 00:22:32.740232   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:22:32.765991   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:22:32.789500   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:22:32.813392   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 00:22:32.840594   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:22:32.866280   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 00:22:32.892068   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 00:22:32.909507   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0717 00:22:32.927221   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 00:22:32.945031   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0717 00:22:32.962994   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 00:22:32.979730   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 00:22:32.996113   30817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 00:22:33.012363   30817 ssh_runner.go:195] Run: openssl version
	I0717 00:22:33.018269   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:22:33.029243   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:22:33.033500   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:22:33.033543   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:22:33.039222   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:22:33.049999   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 00:22:33.060608   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 00:22:33.065264   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 00:22:33.065322   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 00:22:33.071592   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 00:22:33.083902   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 00:22:33.095304   30817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 00:22:33.099722   30817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 00:22:33.099766   30817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 00:22:33.105677   30817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:22:33.116949   30817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:22:33.120835   30817 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:22:33.120884   30817 kubeadm.go:934] updating node {m03 192.168.39.97 8443 v1.30.2 crio true true} ...
	I0717 00:22:33.120966   30817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565881-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:22:33.120988   30817 kube-vip.go:115] generating kube-vip config ...
	I0717 00:22:33.121019   30817 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:22:33.138474   30817 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:22:33.138541   30817 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
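
kube-vip runs as a static pod on every control-plane node; the manifest above is rendered with the VIP address, port and interface filled in and then written to /etc/kubernetes/manifests/kube-vip.yaml, where the kubelet picks it up. A stripped-down sketch of that templating step (the template text is shortened and is not the one minikube ships):

package main

import (
	"os"
	"text/template"
)

// A shortened stand-in for the kube-vip static-pod template.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	_ = t.Execute(os.Stdout, struct {
		VIP, Port, Interface string
	}{VIP: "192.168.39.254", Port: "8443", Interface: "eth0"})
}
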
	I0717 00:22:33.138596   30817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:22:33.147765   30817 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 00:22:33.147810   30817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 00:22:33.157388   30817 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0717 00:22:33.157413   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:22:33.157415   30817 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 00:22:33.157429   30817 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0717 00:22:33.157435   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:22:33.157464   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:22:33.157475   30817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:22:33.157500   30817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:22:33.171689   30817 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:22:33.171743   30817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 00:22:33.171772   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 00:22:33.171779   30817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:22:33.171694   30817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 00:22:33.171882   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 00:22:33.189423   30817 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 00:22:33.189458   30817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
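
Because /var/lib/minikube/binaries/v1.30.2 did not exist on the new node, kubeadm, kubectl and kubelet are fetched from dl.k8s.io and verified against the published .sha256 digests before being copied over. A minimal sketch of one download-and-verify step, using the v1.30.2 kubeadm URL from the log:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
	if hex.EncodeToString(got[:]) != want {
		log.Fatalf("checksum mismatch: got %x want %s", got, want)
	}
	if err := os.WriteFile("kubeadm", bin, 0o755); err != nil {
		log.Fatal(err)
	}
	fmt.Println("verified", len(bin), "bytes")
}
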
	I0717 00:22:34.038361   30817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 00:22:34.047755   30817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0717 00:22:34.064851   30817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:22:34.083696   30817 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:22:34.101996   30817 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:22:34.106031   30817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:22:34.118342   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:22:34.257388   30817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:22:34.279588   30817 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:22:34.279924   30817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:22:34.279968   30817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:22:34.295679   30817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36457
	I0717 00:22:34.296113   30817 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:22:34.296738   30817 main.go:141] libmachine: Using API Version  1
	I0717 00:22:34.296771   30817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:22:34.297155   30817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:22:34.297334   30817 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:22:34.297539   30817 start.go:317] joinCluster: &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0717 00:22:34.297694   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 00:22:34.297714   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:22:34.301080   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:22:34.301631   30817 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:22:34.301654   30817 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:22:34.301921   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:22:34.302108   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:22:34.302261   30817 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:22:34.302408   30817 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:22:34.464709   30817 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:22:34.464765   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ulp9g8.7cfxncvt58ljnnv6 --discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565881-m03 --control-plane --apiserver-advertise-address=192.168.39.97 --apiserver-bind-port=8443"
	I0717 00:22:58.410484   30817 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ulp9g8.7cfxncvt58ljnnv6 --discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565881-m03 --control-plane --apiserver-advertise-address=192.168.39.97 --apiserver-bind-port=8443": (23.94569319s)
	I0717 00:22:58.410524   30817 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 00:22:58.930350   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565881-m03 minikube.k8s.io/updated_at=2024_07_17T00_22_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=ha-565881 minikube.k8s.io/primary=false
	I0717 00:22:59.059327   30817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565881-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 00:22:59.190930   30817 start.go:319] duration metric: took 24.893385889s to joinCluster
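
Once kubeadm join returns, the new member is labelled with the minikube metadata and the control-plane NoSchedule taint is removed so it can also schedule workloads (the kubectl label and kubectl taint runs above). The labelling half could equally be done with client-go; a sketch, assuming the kubeconfig path used on the node and that k8s.io/client-go is available:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	node, err := cs.CoreV1().Nodes().Get(ctx, "ha-565881-m03", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	// Illustrative subset of the labels applied in the log.
	node.Labels["minikube.k8s.io/name"] = "ha-565881"
	node.Labels["minikube.k8s.io/primary"] = "false"
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}
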
	I0717 00:22:59.191009   30817 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:22:59.191370   30817 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:22:59.192680   30817 out.go:177] * Verifying Kubernetes components...
	I0717 00:22:59.194358   30817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:22:59.478074   30817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:22:59.513516   30817 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:22:59.513836   30817 kapi.go:59] client config for ha-565881: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.crt", KeyFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key", CAFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 00:22:59.513912   30817 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.238:8443
	I0717 00:22:59.514182   30817 node_ready.go:35] waiting up to 6m0s for node "ha-565881-m03" to be "Ready" ...
	I0717 00:22:59.514255   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:22:59.514265   30817 round_trippers.go:469] Request Headers:
	I0717 00:22:59.514280   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:22:59.514289   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:22:59.517540   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:00.014851   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:00.014874   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:00.014883   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:00.014891   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:00.018750   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:00.514832   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:00.514858   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:00.514870   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:00.514874   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:00.519444   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:01.014782   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:01.014805   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:01.014813   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:01.014817   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:01.018702   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:01.514670   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:01.514698   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:01.514706   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:01.514709   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:01.519010   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:01.519823   30817 node_ready.go:53] node "ha-565881-m03" has status "Ready":"False"
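
The repeated GET /api/v1/nodes/ha-565881-m03 calls in this loop simply re-read the Node object roughly every 500ms and check its Ready condition until it turns True or the 6m budget expires. The same check written against client-go; the kubeconfig path and node name are taken from the log, the structure is a sketch:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-565881-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for node to become Ready")
}
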
	I0717 00:23:02.015202   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:02.015226   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:02.015237   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:02.015245   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:02.018448   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:02.514669   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:02.514692   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:02.514699   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:02.514703   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:02.518679   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:03.015337   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:03.015357   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:03.015365   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:03.015368   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:03.019663   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:03.514346   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:03.514367   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:03.514374   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:03.514378   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:03.517411   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:04.014511   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:04.014529   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:04.014537   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:04.014542   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:04.018629   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:04.020041   30817 node_ready.go:53] node "ha-565881-m03" has status "Ready":"False"
	I0717 00:23:04.514874   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:04.514898   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:04.514907   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:04.514910   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:04.518316   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:05.015002   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:05.015026   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:05.015042   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:05.015047   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:05.018792   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:05.514817   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:05.514843   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:05.514856   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:05.514862   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:05.520005   30817 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:23:06.015193   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:06.015216   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:06.015226   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:06.015232   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:06.019436   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:06.020540   30817 node_ready.go:53] node "ha-565881-m03" has status "Ready":"False"
	I0717 00:23:06.514977   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:06.514997   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:06.515005   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:06.515010   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:06.518528   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:07.014508   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:07.014530   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:07.014550   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:07.014554   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:07.017786   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:07.514542   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:07.514564   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:07.514571   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:07.514576   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:07.518371   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:08.014796   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:08.014822   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:08.014832   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:08.014837   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:08.019112   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:08.515154   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:08.515183   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:08.515193   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:08.515199   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:08.518568   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:08.519433   30817 node_ready.go:53] node "ha-565881-m03" has status "Ready":"False"
	I0717 00:23:09.014980   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:09.015002   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:09.015017   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:09.015022   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:09.019391   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:09.515090   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:09.515112   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:09.515120   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:09.515124   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:09.519083   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:10.014440   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:10.014471   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:10.014479   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:10.014483   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:10.017804   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:10.514764   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:10.514785   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:10.514793   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:10.514796   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:10.518279   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:11.015416   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:11.015437   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:11.015446   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:11.015451   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:11.019155   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:11.019686   30817 node_ready.go:53] node "ha-565881-m03" has status "Ready":"False"
	I0717 00:23:11.515170   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:11.515208   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:11.515218   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:11.515224   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:11.519110   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:12.015019   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:12.015042   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:12.015052   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:12.015058   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:12.019573   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:12.514641   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:12.514674   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:12.514682   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:12.514685   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:12.518420   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:13.015241   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:13.015261   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.015269   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.015273   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.018764   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:13.019369   30817 node_ready.go:49] node "ha-565881-m03" has status "Ready":"True"
	I0717 00:23:13.019387   30817 node_ready.go:38] duration metric: took 13.505188759s for node "ha-565881-m03" to be "Ready" ...
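
[Editor's aside] The 13.5 s wait above is the node_ready loop repeatedly issuing GET /api/v1/nodes/ha-565881-m03 (roughly every 500 ms) until the node's Ready condition flips to True. Below is a minimal client-go sketch of that polling pattern, illustrative only and not minikube's own node_ready.go; the kubeconfig path and the 6-minute deadline are assumptions for the example.

    // Illustrative sketch: poll a node's Ready condition until it is True,
    // mirroring the repeated GETs of /api/v1/nodes/ha-565881-m03 logged above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the NodeReady condition is True.
    func nodeIsReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumed kubeconfig location; substitute your own.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	deadline := time.Now().Add(6 * time.Minute) // assumed timeout
    	for time.Now().Before(deadline) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-565881-m03", metav1.GetOptions{})
    		if err == nil && nodeIsReady(node) {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the cadence visible in the log
    	}
    	fmt.Println("timed out waiting for node to become Ready")
    }
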
	I0717 00:23:13.019394   30817 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:23:13.019453   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:23:13.019465   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.019472   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.019477   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.026342   30817 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:23:13.035633   30817 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7wsqq" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.035728   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7wsqq
	I0717 00:23:13.035741   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.035751   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.035760   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.038501   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:23:13.039113   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:13.039127   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.039133   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.039138   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.041530   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:23:13.042212   30817 pod_ready.go:92] pod "coredns-7db6d8ff4d-7wsqq" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:13.042235   30817 pod_ready.go:81] duration metric: took 6.575818ms for pod "coredns-7db6d8ff4d-7wsqq" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.042245   30817 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xftzx" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.042304   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xftzx
	I0717 00:23:13.042315   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.042325   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.042335   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.045410   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:13.045900   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:13.045917   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.045925   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.045929   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.048290   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:23:13.048764   30817 pod_ready.go:92] pod "coredns-7db6d8ff4d-xftzx" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:13.048780   30817 pod_ready.go:81] duration metric: took 6.528388ms for pod "coredns-7db6d8ff4d-xftzx" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.048791   30817 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.048849   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881
	I0717 00:23:13.048861   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.048870   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.048876   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.051348   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:23:13.051796   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:13.051808   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.051815   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.051819   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.054698   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:23:13.055559   30817 pod_ready.go:92] pod "etcd-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:13.055578   30817 pod_ready.go:81] duration metric: took 6.779522ms for pod "etcd-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.055590   30817 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.055646   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m02
	I0717 00:23:13.055656   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.055666   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.055674   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.059245   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:13.060123   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:13.060141   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.060151   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.060156   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.072051   30817 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0717 00:23:13.072588   30817 pod_ready.go:92] pod "etcd-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:13.072607   30817 pod_ready.go:81] duration metric: took 17.009719ms for pod "etcd-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.072616   30817 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:13.215991   30817 request.go:629] Waited for 143.316913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m03
	I0717 00:23:13.216073   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m03
	I0717 00:23:13.216080   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.216092   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.216103   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.220188   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:13.415421   30817 request.go:629] Waited for 194.29659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:13.415482   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:13.415489   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.415497   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.415501   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.419268   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:13.615369   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m03
	I0717 00:23:13.615389   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.615397   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.615402   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.618753   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:13.815456   30817 request.go:629] Waited for 196.064615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:13.815542   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:13.815548   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:13.815556   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:13.815565   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:13.819217   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:14.073709   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m03
	I0717 00:23:14.073731   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:14.073739   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:14.073745   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:14.076969   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:14.216213   30817 request.go:629] Waited for 138.237276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:14.216278   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:14.216286   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:14.216295   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:14.216300   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:14.219940   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:14.573255   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m03
	I0717 00:23:14.573279   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:14.573289   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:14.573294   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:14.577343   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:14.615374   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:14.615408   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:14.615416   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:14.615421   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:14.618773   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:15.073373   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565881-m03
	I0717 00:23:15.073395   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:15.073406   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:15.073412   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:15.077186   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:15.078010   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:15.078029   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:15.078039   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:15.078046   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:15.080986   30817 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:23:15.081634   30817 pod_ready.go:92] pod "etcd-ha-565881-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:15.081652   30817 pod_ready.go:81] duration metric: took 2.009029844s for pod "etcd-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:15.081668   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:15.216016   30817 request.go:629] Waited for 134.296133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881
	I0717 00:23:15.216072   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881
	I0717 00:23:15.216077   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:15.216084   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:15.216089   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:15.219511   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:15.415725   30817 request.go:629] Waited for 195.353261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:15.415778   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:15.415783   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:15.415791   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:15.415797   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:15.419068   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:15.419812   30817 pod_ready.go:92] pod "kube-apiserver-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:15.419837   30817 pod_ready.go:81] duration metric: took 338.159133ms for pod "kube-apiserver-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:15.419851   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:15.615891   30817 request.go:629] Waited for 195.979681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881-m02
	I0717 00:23:15.616011   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881-m02
	I0717 00:23:15.616021   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:15.616028   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:15.616033   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:15.619567   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:15.815604   30817 request.go:629] Waited for 195.354554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:15.815667   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:15.815672   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:15.815680   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:15.815686   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:15.819581   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:15.820216   30817 pod_ready.go:92] pod "kube-apiserver-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:15.820238   30817 pod_ready.go:81] duration metric: took 400.379052ms for pod "kube-apiserver-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:15.820250   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:16.015261   30817 request.go:629] Waited for 194.946962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881-m03
	I0717 00:23:16.015322   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881-m03
	I0717 00:23:16.015327   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:16.015335   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:16.015340   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:16.020361   30817 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:23:16.215777   30817 request.go:629] Waited for 194.358244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:16.215858   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:16.215866   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:16.215878   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:16.215886   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:16.219553   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:16.220680   30817 pod_ready.go:92] pod "kube-apiserver-ha-565881-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:16.220701   30817 pod_ready.go:81] duration metric: took 400.441569ms for pod "kube-apiserver-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:16.220711   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:16.415806   30817 request.go:629] Waited for 195.030033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881
	I0717 00:23:16.415868   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881
	I0717 00:23:16.415873   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:16.415881   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:16.415884   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:16.419707   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:16.615765   30817 request.go:629] Waited for 195.369569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:16.615830   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:16.615835   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:16.615842   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:16.615847   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:16.619918   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:16.620498   30817 pod_ready.go:92] pod "kube-controller-manager-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:16.620518   30817 pod_ready.go:81] duration metric: took 399.798082ms for pod "kube-controller-manager-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:16.620531   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:16.815644   30817 request.go:629] Waited for 195.032644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881-m02
	I0717 00:23:16.815702   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881-m02
	I0717 00:23:16.815709   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:16.815716   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:16.815723   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:16.818996   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:17.015998   30817 request.go:629] Waited for 196.358363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:17.016111   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:17.016122   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:17.016130   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:17.016134   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:17.019563   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:17.020035   30817 pod_ready.go:92] pod "kube-controller-manager-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:17.020057   30817 pod_ready.go:81] duration metric: took 399.517092ms for pod "kube-controller-manager-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:17.020070   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:17.216169   30817 request.go:629] Waited for 196.033808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881-m03
	I0717 00:23:17.216246   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565881-m03
	I0717 00:23:17.216251   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:17.216258   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:17.216266   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:17.220549   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:17.415628   30817 request.go:629] Waited for 193.57967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:17.415685   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:17.415690   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:17.415698   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:17.415702   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:17.419208   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:17.419940   30817 pod_ready.go:92] pod "kube-controller-manager-ha-565881-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:17.419958   30817 pod_ready.go:81] duration metric: took 399.881416ms for pod "kube-controller-manager-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:17.419969   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2f9rj" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:17.616061   30817 request.go:629] Waited for 196.018703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2f9rj
	I0717 00:23:17.616123   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2f9rj
	I0717 00:23:17.616129   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:17.616137   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:17.616142   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:17.619667   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:17.815557   30817 request.go:629] Waited for 195.164155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:17.815610   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:17.815618   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:17.815625   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:17.815630   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:17.818946   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:17.819794   30817 pod_ready.go:92] pod "kube-proxy-2f9rj" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:17.819813   30817 pod_ready.go:81] duration metric: took 399.826808ms for pod "kube-proxy-2f9rj" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:17.819826   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7p2jl" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:18.016159   30817 request.go:629] Waited for 196.266113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7p2jl
	I0717 00:23:18.016245   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7p2jl
	I0717 00:23:18.016257   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:18.016268   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:18.016277   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:18.019661   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:18.215718   30817 request.go:629] Waited for 195.353457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:18.215791   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:18.215798   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:18.215809   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:18.215814   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:18.219415   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:18.220029   30817 pod_ready.go:92] pod "kube-proxy-7p2jl" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:18.220049   30817 pod_ready.go:81] duration metric: took 400.214022ms for pod "kube-proxy-7p2jl" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:18.220059   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k5x6x" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:18.416062   30817 request.go:629] Waited for 195.938205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k5x6x
	I0717 00:23:18.416119   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k5x6x
	I0717 00:23:18.416125   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:18.416131   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:18.416135   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:18.420688   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:18.615740   30817 request.go:629] Waited for 194.365134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:18.615819   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:18.615830   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:18.615838   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:18.615845   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:18.619901   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:18.620633   30817 pod_ready.go:92] pod "kube-proxy-k5x6x" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:18.620654   30817 pod_ready.go:81] duration metric: took 400.588373ms for pod "kube-proxy-k5x6x" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:18.620667   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:18.816026   30817 request.go:629] Waited for 195.241694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881
	I0717 00:23:18.816085   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881
	I0717 00:23:18.816090   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:18.816098   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:18.816101   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:18.819500   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:19.015301   30817 request.go:629] Waited for 194.805861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:19.015391   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881
	I0717 00:23:19.015405   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:19.015413   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:19.015417   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:19.019741   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:19.020440   30817 pod_ready.go:92] pod "kube-scheduler-ha-565881" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:19.020462   30817 pod_ready.go:81] duration metric: took 399.785274ms for pod "kube-scheduler-ha-565881" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:19.020475   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:19.215528   30817 request.go:629] Waited for 194.97553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881-m02
	I0717 00:23:19.215589   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881-m02
	I0717 00:23:19.215598   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:19.215605   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:19.215609   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:19.219123   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:19.416233   30817 request.go:629] Waited for 196.398252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:19.416281   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02
	I0717 00:23:19.416287   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:19.416294   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:19.416299   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:19.419669   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:19.420418   30817 pod_ready.go:92] pod "kube-scheduler-ha-565881-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:19.420438   30817 pod_ready.go:81] duration metric: took 399.955187ms for pod "kube-scheduler-ha-565881-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:19.420447   30817 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:19.615368   30817 request.go:629] Waited for 194.859433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881-m03
	I0717 00:23:19.615436   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565881-m03
	I0717 00:23:19.615441   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:19.615449   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:19.615453   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:19.619062   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:19.816298   30817 request.go:629] Waited for 196.280861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:19.816381   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-565881-m03
	I0717 00:23:19.816389   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:19.816404   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:19.816414   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:19.820466   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:19.821245   30817 pod_ready.go:92] pod "kube-scheduler-ha-565881-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:23:19.821286   30817 pod_ready.go:81] duration metric: took 400.822243ms for pod "kube-scheduler-ha-565881-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:23:19.821306   30817 pod_ready.go:38] duration metric: took 6.801901637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:23:19.821328   30817 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:23:19.821397   30817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:23:19.839119   30817 api_server.go:72] duration metric: took 20.648070367s to wait for apiserver process to appear ...
	I0717 00:23:19.839144   30817 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:23:19.839165   30817 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0717 00:23:19.843248   30817 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
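
[Editor's aside] The two lines above are minikube's apiserver health gate: an HTTPS GET to /healthz that must come back 200 with body "ok". A minimal, self-contained sketch of the same check follows; it is illustrative only, and skipping TLS verification is an assumption made for brevity (minikube itself trusts the cluster CA).

    // Illustrative sketch: poll an apiserver /healthz endpoint until it
    // returns 200 "ok", as the api_server.go log lines above describe.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // assumption for the sketch
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil // healthz reported "ok", matching the log above
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.238:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
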
	I0717 00:23:19.843334   30817 round_trippers.go:463] GET https://192.168.39.238:8443/version
	I0717 00:23:19.843344   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:19.843352   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:19.843359   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:19.844189   30817 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 00:23:19.844258   30817 api_server.go:141] control plane version: v1.30.2
	I0717 00:23:19.844275   30817 api_server.go:131] duration metric: took 5.124245ms to wait for apiserver health ...
	I0717 00:23:19.844286   30817 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:23:20.015736   30817 request.go:629] Waited for 171.346584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:23:20.015793   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:23:20.015798   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:20.015806   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:20.015811   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:20.022896   30817 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:23:20.029400   30817 system_pods.go:59] 24 kube-system pods found
	I0717 00:23:20.029427   30817 system_pods.go:61] "coredns-7db6d8ff4d-7wsqq" [4a433e03-decb-405d-82f1-b14a72412c8a] Running
	I0717 00:23:20.029438   30817 system_pods.go:61] "coredns-7db6d8ff4d-xftzx" [01fe6b06-0568-4da7-bd0c-1883bc99995c] Running
	I0717 00:23:20.029442   30817 system_pods.go:61] "etcd-ha-565881" [4971f520-5352-442e-b9a2-0944b0755b7f] Running
	I0717 00:23:20.029446   30817 system_pods.go:61] "etcd-ha-565881-m02" [4566d137-b6d8-4af0-8c19-db42aad855cc] Running
	I0717 00:23:20.029450   30817 system_pods.go:61] "etcd-ha-565881-m03" [dada7623-e9d0-4848-a760-4a0a7f63990e] Running
	I0717 00:23:20.029453   30817 system_pods.go:61] "kindnet-5lrdt" [bd3c879a-726b-40ed-ba4f-897bf43cda26] Running
	I0717 00:23:20.029456   30817 system_pods.go:61] "kindnet-ctstx" [84c6251a-f4d9-4bd5-813e-52c72e3a5a83] Running
	I0717 00:23:20.029459   30817 system_pods.go:61] "kindnet-k882n" [a1f0c383-2430-4479-90ad-d944476aee6f] Running
	I0717 00:23:20.029462   30817 system_pods.go:61] "kube-apiserver-ha-565881" [ef350ec6-b254-4b11-8130-fb059c05bc73] Running
	I0717 00:23:20.029468   30817 system_pods.go:61] "kube-apiserver-ha-565881-m02" [58bb06fd-18e6-4457-8bd9-82438e5d6e87] Running
	I0717 00:23:20.029471   30817 system_pods.go:61] "kube-apiserver-ha-565881-m03" [f4678e70-6416-4623-a8b1-ddb0a1c31843] Running
	I0717 00:23:20.029476   30817 system_pods.go:61] "kube-controller-manager-ha-565881" [30ebcd5f-fb7b-4877-bc4b-e04de10a184e] Running
	I0717 00:23:20.029480   30817 system_pods.go:61] "kube-controller-manager-ha-565881-m02" [dfc4ee73-fe0f-4ec4-bdb9-3827093d3ea0] Running
	I0717 00:23:20.029491   30817 system_pods.go:61] "kube-controller-manager-ha-565881-m03" [8f256263-ae87-4500-9367-bbdfe67effd6] Running
	I0717 00:23:20.029494   30817 system_pods.go:61] "kube-proxy-2f9rj" [d5e16caa-15e9-4295-8a9a-0e66912f9f1b] Running
	I0717 00:23:20.029497   30817 system_pods.go:61] "kube-proxy-7p2jl" [74f5aff6-5e99-4cfe-af04-94198e8d9616] Running
	I0717 00:23:20.029500   30817 system_pods.go:61] "kube-proxy-k5x6x" [d6bf8a53-e66d-4e97-b1f4-470c70ee87e2] Running
	I0717 00:23:20.029503   30817 system_pods.go:61] "kube-scheduler-ha-565881" [876bc7f0-71d6-45b1-a313-d94df8f89f18] Running
	I0717 00:23:20.029506   30817 system_pods.go:61] "kube-scheduler-ha-565881-m02" [9734780b-67c9-4727-badb-f6ba028ba095] Running
	I0717 00:23:20.029509   30817 system_pods.go:61] "kube-scheduler-ha-565881-m03" [5e074a3c-dff5-4df9-aa3b-deb2e8e6cdde] Running
	I0717 00:23:20.029512   30817 system_pods.go:61] "kube-vip-ha-565881" [7d058028-c841-4807-936f-3f81c1718a93] Running
	I0717 00:23:20.029515   30817 system_pods.go:61] "kube-vip-ha-565881-m02" [06e40aae-1d32-4577-92f5-32a6ce3e1813] Running
	I0717 00:23:20.029518   30817 system_pods.go:61] "kube-vip-ha-565881-m03" [85f81bf9-9465-4eaf-ba50-7aac4090d563] Running
	I0717 00:23:20.029523   30817 system_pods.go:61] "storage-provisioner" [0aa1050a-43e1-4f7a-a2df-80cafb48e673] Running
	I0717 00:23:20.029531   30817 system_pods.go:74] duration metric: took 185.238424ms to wait for pod list to return data ...
	I0717 00:23:20.029541   30817 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:23:20.215985   30817 request.go:629] Waited for 186.373366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:23:20.216060   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:23:20.216066   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:20.216073   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:20.216080   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:20.219992   30817 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:23:20.220125   30817 default_sa.go:45] found service account: "default"
	I0717 00:23:20.220141   30817 default_sa.go:55] duration metric: took 190.590459ms for default service account to be created ...
	I0717 00:23:20.220151   30817 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:23:20.415501   30817 request.go:629] Waited for 195.283071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:23:20.415586   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0717 00:23:20.415618   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:20.415630   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:20.415634   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:20.422110   30817 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:23:20.430762   30817 system_pods.go:86] 24 kube-system pods found
	I0717 00:23:20.430788   30817 system_pods.go:89] "coredns-7db6d8ff4d-7wsqq" [4a433e03-decb-405d-82f1-b14a72412c8a] Running
	I0717 00:23:20.430793   30817 system_pods.go:89] "coredns-7db6d8ff4d-xftzx" [01fe6b06-0568-4da7-bd0c-1883bc99995c] Running
	I0717 00:23:20.430797   30817 system_pods.go:89] "etcd-ha-565881" [4971f520-5352-442e-b9a2-0944b0755b7f] Running
	I0717 00:23:20.430801   30817 system_pods.go:89] "etcd-ha-565881-m02" [4566d137-b6d8-4af0-8c19-db42aad855cc] Running
	I0717 00:23:20.430804   30817 system_pods.go:89] "etcd-ha-565881-m03" [dada7623-e9d0-4848-a760-4a0a7f63990e] Running
	I0717 00:23:20.430808   30817 system_pods.go:89] "kindnet-5lrdt" [bd3c879a-726b-40ed-ba4f-897bf43cda26] Running
	I0717 00:23:20.430812   30817 system_pods.go:89] "kindnet-ctstx" [84c6251a-f4d9-4bd5-813e-52c72e3a5a83] Running
	I0717 00:23:20.430816   30817 system_pods.go:89] "kindnet-k882n" [a1f0c383-2430-4479-90ad-d944476aee6f] Running
	I0717 00:23:20.430819   30817 system_pods.go:89] "kube-apiserver-ha-565881" [ef350ec6-b254-4b11-8130-fb059c05bc73] Running
	I0717 00:23:20.430824   30817 system_pods.go:89] "kube-apiserver-ha-565881-m02" [58bb06fd-18e6-4457-8bd9-82438e5d6e87] Running
	I0717 00:23:20.430828   30817 system_pods.go:89] "kube-apiserver-ha-565881-m03" [f4678e70-6416-4623-a8b1-ddb0a1c31843] Running
	I0717 00:23:20.430834   30817 system_pods.go:89] "kube-controller-manager-ha-565881" [30ebcd5f-fb7b-4877-bc4b-e04de10a184e] Running
	I0717 00:23:20.430840   30817 system_pods.go:89] "kube-controller-manager-ha-565881-m02" [dfc4ee73-fe0f-4ec4-bdb9-3827093d3ea0] Running
	I0717 00:23:20.430847   30817 system_pods.go:89] "kube-controller-manager-ha-565881-m03" [8f256263-ae87-4500-9367-bbdfe67effd6] Running
	I0717 00:23:20.430856   30817 system_pods.go:89] "kube-proxy-2f9rj" [d5e16caa-15e9-4295-8a9a-0e66912f9f1b] Running
	I0717 00:23:20.430862   30817 system_pods.go:89] "kube-proxy-7p2jl" [74f5aff6-5e99-4cfe-af04-94198e8d9616] Running
	I0717 00:23:20.430871   30817 system_pods.go:89] "kube-proxy-k5x6x" [d6bf8a53-e66d-4e97-b1f4-470c70ee87e2] Running
	I0717 00:23:20.430878   30817 system_pods.go:89] "kube-scheduler-ha-565881" [876bc7f0-71d6-45b1-a313-d94df8f89f18] Running
	I0717 00:23:20.430887   30817 system_pods.go:89] "kube-scheduler-ha-565881-m02" [9734780b-67c9-4727-badb-f6ba028ba095] Running
	I0717 00:23:20.430893   30817 system_pods.go:89] "kube-scheduler-ha-565881-m03" [5e074a3c-dff5-4df9-aa3b-deb2e8e6cdde] Running
	I0717 00:23:20.430899   30817 system_pods.go:89] "kube-vip-ha-565881" [7d058028-c841-4807-936f-3f81c1718a93] Running
	I0717 00:23:20.430907   30817 system_pods.go:89] "kube-vip-ha-565881-m02" [06e40aae-1d32-4577-92f5-32a6ce3e1813] Running
	I0717 00:23:20.430913   30817 system_pods.go:89] "kube-vip-ha-565881-m03" [85f81bf9-9465-4eaf-ba50-7aac4090d563] Running
	I0717 00:23:20.430921   30817 system_pods.go:89] "storage-provisioner" [0aa1050a-43e1-4f7a-a2df-80cafb48e673] Running
	I0717 00:23:20.430927   30817 system_pods.go:126] duration metric: took 210.770682ms to wait for k8s-apps to be running ...
	I0717 00:23:20.430936   30817 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:23:20.430982   30817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:23:20.446693   30817 system_svc.go:56] duration metric: took 15.749024ms WaitForService to wait for kubelet
	I0717 00:23:20.446720   30817 kubeadm.go:582] duration metric: took 21.255674297s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:23:20.446754   30817 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:23:20.616184   30817 request.go:629] Waited for 169.340619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes
	I0717 00:23:20.616242   30817 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes
	I0717 00:23:20.616247   30817 round_trippers.go:469] Request Headers:
	I0717 00:23:20.616254   30817 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:23:20.616258   30817 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:23:20.620476   30817 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:23:20.622374   30817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:23:20.622400   30817 node_conditions.go:123] node cpu capacity is 2
	I0717 00:23:20.622414   30817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:23:20.622418   30817 node_conditions.go:123] node cpu capacity is 2
	I0717 00:23:20.622423   30817 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:23:20.622428   30817 node_conditions.go:123] node cpu capacity is 2
	I0717 00:23:20.622433   30817 node_conditions.go:105] duration metric: took 175.670539ms to run NodePressure ...
	I0717 00:23:20.622449   30817 start.go:241] waiting for startup goroutines ...
	I0717 00:23:20.622474   30817 start.go:255] writing updated cluster config ...
	I0717 00:23:20.622902   30817 ssh_runner.go:195] Run: rm -f paused
	I0717 00:23:20.675499   30817 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:23:20.678010   30817 out.go:177] * Done! kubectl is now configured to use "ha-565881" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.526354262Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e75ea65-8b21-4290-b93a-d5261cb8124e name=/runtime.v1.RuntimeService/Version
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.527640548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a9018b1-34ea-4d34-a971-09390e82ce6f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.528371331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176079528345631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a9018b1-34ea-4d34-a971-09390e82ce6f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.529169188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8059428f-0813-4be7-9e3b-b3b4ecd8cb0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.529265401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8059428f-0813-4be7-9e3b-b3b4ecd8cb0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.529780559Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721175803248450444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667828411216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c,PodSandboxId:f467ed059c56cdaaf8de2830ba730e06e558235deeb9422958622f92d7384b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667809002075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52bef5d657a6cb69965245c2615be216b56d82ab4763232390ed306790434354,PodSandboxId:764ba5023d3eee2d36d44948179f7941d3be91025b80a670618eef4c52d68c13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721175667689999819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721175655675663031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175653
514923868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c44e183ef1f377bf131b0f0b7f0976adbdf72efd90beb01dfa5c8be36324e5,PodSandboxId:bc50d045ef7cdfc6e034ee33dca219eca6353dd58f575b46aa62d22e927f6079,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172117563523
0999243,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22442ecb09ab7532c1c9a7afada397a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175633405344562,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175633392337218,Labels:map[string]string{io.kubernetes.container.name: et
cd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c,PodSandboxId:bd261c9ae650e8f175c47bca295568fcc16c69653c2291cfeac60cbf338961c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175633365293199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff,PodSandboxId:783f00b872a663d4351199571512126920b7c28ffc22524bad0b17ff314b2eec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175633277908414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8059428f-0813-4be7-9e3b-b3b4ecd8cb0c name=/runtime.v1.RuntimeService/ListContainers
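(Editor's note: the journal entries in this section repeat the same Version → ImageFsInfo → ListContainers / ListPodSandbox cycle; these are the CRI gRPC calls that the kubelet and the log collector issue against CRI-O. Below is a minimal sketch of the Version and ListContainers calls using the k8s.io/cri-api v1 client. The socket path is the CRI-O default and, like the timeout, is an assumption rather than something taken from this report.)

// crilist.go: call RuntimeService/Version and RuntimeService/ListContainers
// against CRI-O, mirroring the debug entries recorded in the journal above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default CRI-O socket; override if your runtime listens elsewhere.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Same call the "RuntimeService/Version" entries record.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// Same call the "ListContainers" entries record: no filter, full list.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}
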
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.554891370Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e2be342-e3a8-47c3-9f70-f39baa8492e3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.555146408Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-sxdsp,Uid:7a532a93-0ab1-4911-b7f5-9d85eda2be75,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175801959385834,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:23:21.627315007Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f467ed059c56cdaaf8de2830ba730e06e558235deeb9422958622f92d7384b50,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xftzx,Uid:01fe6b06-0568-4da7-bd0c-1883bc99995c,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1721175667543103356,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:21:07.214009072Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7wsqq,Uid:4a433e03-decb-405d-82f1-b14a72412c8a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175667539564286,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-07-17T00:21:07.213868280Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:764ba5023d3eee2d36d44948179f7941d3be91025b80a670618eef4c52d68c13,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0aa1050a-43e1-4f7a-a2df-80cafb48e673,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175667516880822,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T00:21:07.209314242Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&PodSandboxMetadata{Name:kube-proxy-7p2jl,Uid:74f5aff6-5e99-4cfe-af04-94198e8d9616,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175653220303170,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-07-17T00:20:52.887845117Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&PodSandboxMetadata{Name:kindnet-5lrdt,Uid:bd3c879a-726b-40ed-ba4f-897bf43cda26,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175653218014976,Labels:map[string]string{app: kindnet,controller-revision-hash: 545f566499,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:20:52.903992589Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&PodSandboxMetadata{Name:etcd-ha-565881,Uid:5f82fe075280b90a17d8f04a23fc7629,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1721175633118938420,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.238:2379,kubernetes.io/config.hash: 5f82fe075280b90a17d8f04a23fc7629,kubernetes.io/config.seen: 2024-07-17T00:20:32.635373262Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:783f00b872a663d4351199571512126920b7c28ffc22524bad0b17ff314b2eec,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-565881,Uid:137a148a990fa52e8281e355098ea021,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175633108917952,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a99
0fa52e8281e355098ea021,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.238:8443,kubernetes.io/config.hash: 137a148a990fa52e8281e355098ea021,kubernetes.io/config.seen: 2024-07-17T00:20:32.635374808Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bc50d045ef7cdfc6e034ee33dca219eca6353dd58f575b46aa62d22e927f6079,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-565881,Uid:22442ecb09ab7532c1c9a7afada397a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175633104462537,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22442ecb09ab7532c1c9a7afada397a4,},Annotations:map[string]string{kubernetes.io/config.hash: 22442ecb09ab7532c1c9a7afada397a4,kubernetes.io/config.seen: 2024-07-17T00:20:32.635371479Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bd261c9ae650e8f175c4
7bca295568fcc16c69653c2291cfeac60cbf338961c9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-565881,Uid:960ed960c6610568e154d20884b393df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175633099458806,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 960ed960c6610568e154d20884b393df,kubernetes.io/config.seen: 2024-07-17T00:20:32.635376290Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-565881,Uid:b826e45ce780868932f8d9a5a17c6b9c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721175633092007394,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b826e45ce780868932f8d9a5a17c6b9c,kubernetes.io/config.seen: 2024-07-17T00:20:32.635367069Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7e2be342-e3a8-47c3-9f70-f39baa8492e3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.555844472Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8509a12e-d9af-47ed-933a-b1db32c76e7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.555899128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8509a12e-d9af-47ed-933a-b1db32c76e7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.556123418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721175803248450444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667828411216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c,PodSandboxId:f467ed059c56cdaaf8de2830ba730e06e558235deeb9422958622f92d7384b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667809002075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52bef5d657a6cb69965245c2615be216b56d82ab4763232390ed306790434354,PodSandboxId:764ba5023d3eee2d36d44948179f7941d3be91025b80a670618eef4c52d68c13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721175667689999819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721175655675663031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175653
514923868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c44e183ef1f377bf131b0f0b7f0976adbdf72efd90beb01dfa5c8be36324e5,PodSandboxId:bc50d045ef7cdfc6e034ee33dca219eca6353dd58f575b46aa62d22e927f6079,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172117563523
0999243,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22442ecb09ab7532c1c9a7afada397a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175633405344562,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175633392337218,Labels:map[string]string{io.kubernetes.container.name: et
cd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c,PodSandboxId:bd261c9ae650e8f175c47bca295568fcc16c69653c2291cfeac60cbf338961c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175633365293199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff,PodSandboxId:783f00b872a663d4351199571512126920b7c28ffc22524bad0b17ff314b2eec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175633277908414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8509a12e-d9af-47ed-933a-b1db32c76e7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.572382817Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69d8a251-8975-4d00-84e1-fabc2fe3a6d6 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.572455009Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69d8a251-8975-4d00-84e1-fabc2fe3a6d6 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.573677197Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=435671cd-3b16-4755-a236-e89ae9c15d0c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.574238674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176079574215824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=435671cd-3b16-4755-a236-e89ae9c15d0c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.575195478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28addd7e-b9bb-4e8f-91ba-f48a43491a70 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.575250258Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28addd7e-b9bb-4e8f-91ba-f48a43491a70 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.575633003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721175803248450444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667828411216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c,PodSandboxId:f467ed059c56cdaaf8de2830ba730e06e558235deeb9422958622f92d7384b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667809002075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52bef5d657a6cb69965245c2615be216b56d82ab4763232390ed306790434354,PodSandboxId:764ba5023d3eee2d36d44948179f7941d3be91025b80a670618eef4c52d68c13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721175667689999819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721175655675663031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175653
514923868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c44e183ef1f377bf131b0f0b7f0976adbdf72efd90beb01dfa5c8be36324e5,PodSandboxId:bc50d045ef7cdfc6e034ee33dca219eca6353dd58f575b46aa62d22e927f6079,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172117563523
0999243,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22442ecb09ab7532c1c9a7afada397a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175633405344562,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175633392337218,Labels:map[string]string{io.kubernetes.container.name: et
cd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c,PodSandboxId:bd261c9ae650e8f175c47bca295568fcc16c69653c2291cfeac60cbf338961c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175633365293199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff,PodSandboxId:783f00b872a663d4351199571512126920b7c28ffc22524bad0b17ff314b2eec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175633277908414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28addd7e-b9bb-4e8f-91ba-f48a43491a70 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.620043956Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e33baa7b-6162-4f0d-bb37-a1aa94e6661c name=/runtime.v1.RuntimeService/Version
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.620136569Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e33baa7b-6162-4f0d-bb37-a1aa94e6661c name=/runtime.v1.RuntimeService/Version
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.621420953Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ee6eb2b-f982-4346-b96e-808c3ed88912 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.622090936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176079622062438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ee6eb2b-f982-4346-b96e-808c3ed88912 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.622907825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10f4a6fc-8105-4aa1-a9ec-b3c1cd18d01a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.622985223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10f4a6fc-8105-4aa1-a9ec-b3c1cd18d01a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:27:59 ha-565881 crio[679]: time="2024-07-17 00:27:59.623242835Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721175803248450444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667828411216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c,PodSandboxId:f467ed059c56cdaaf8de2830ba730e06e558235deeb9422958622f92d7384b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175667809002075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52bef5d657a6cb69965245c2615be216b56d82ab4763232390ed306790434354,PodSandboxId:764ba5023d3eee2d36d44948179f7941d3be91025b80a670618eef4c52d68c13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721175667689999819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721175655675663031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175653
514923868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c44e183ef1f377bf131b0f0b7f0976adbdf72efd90beb01dfa5c8be36324e5,PodSandboxId:bc50d045ef7cdfc6e034ee33dca219eca6353dd58f575b46aa62d22e927f6079,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172117563523
0999243,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22442ecb09ab7532c1c9a7afada397a4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175633405344562,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175633392337218,Labels:map[string]string{io.kubernetes.container.name: et
cd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c,PodSandboxId:bd261c9ae650e8f175c47bca295568fcc16c69653c2291cfeac60cbf338961c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175633365293199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuberne
tes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff,PodSandboxId:783f00b872a663d4351199571512126920b7c28ffc22524bad0b17ff314b2eec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175633277908414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10f4a6fc-8105-4aa1-a9ec-b3c1cd18d01a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	28b495a055524       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   e0bd927bf2760       busybox-fc5497c4f-sxdsp
	928ee85bf546b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   f688446a5f59c       coredns-7db6d8ff4d-7wsqq
	cda0c9ceea230       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   f467ed059c56c       coredns-7db6d8ff4d-xftzx
	52bef5d657a6c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   764ba5023d3ee       storage-provisioner
	52b45808cde82       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    7 minutes ago       Running             kindnet-cni               0                   5c5494014c8b1       kindnet-5lrdt
	e572bb9aec2e8       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      7 minutes ago       Running             kube-proxy                0                   12f43031f4b04       kube-proxy-7p2jl
	14c44e183ef1f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   bc50d045ef7cd       kube-vip-ha-565881
	1ec015ce8f841       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      7 minutes ago       Running             kube-scheduler            0                   a6e2148781333       kube-scheduler-ha-565881
	ab8577693652f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   afbb712100717       etcd-ha-565881
	2735221f6ad7f       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      7 minutes ago       Running             kube-controller-manager   0                   bd261c9ae650e       kube-controller-manager-ha-565881
	c44889c22020b       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      7 minutes ago       Running             kube-apiserver            0                   783f00b872a66       kube-apiserver-ha-565881
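
For reference, a per-node container listing like the table above can usually be reproduced with crictl inside the minikube VM; this is only a sketch, assuming the ha-565881 profile and the crictl CLI bundled with the cri-o runtime:

    minikube -p ha-565881 ssh "sudo crictl ps -a"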
	
	
	==> coredns [928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519] <==
	[INFO] 10.244.2.2:44448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000657716s
	[INFO] 10.244.2.2:51292 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.019019727s
	[INFO] 10.244.1.2:56403 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158179s
	[INFO] 10.244.1.2:35250 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000142805s
	[INFO] 10.244.1.2:40336 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002059439s
	[INFO] 10.244.0.4:37111 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137796s
	[INFO] 10.244.0.4:38097 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000091196s
	[INFO] 10.244.0.4:41409 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000489883s
	[INFO] 10.244.0.4:47790 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002006429s
	[INFO] 10.244.2.2:36117 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000220718s
	[INFO] 10.244.2.2:57319 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118439s
	[INFO] 10.244.1.2:60677 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002037782s
	[INFO] 10.244.0.4:57531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130517s
	[INFO] 10.244.0.4:53255 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001911233s
	[INFO] 10.244.0.4:50878 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001515166s
	[INFO] 10.244.0.4:59609 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005296s
	[INFO] 10.244.0.4:41601 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174604s
	[INFO] 10.244.2.2:54282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144175s
	[INFO] 10.244.2.2:33964 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000291713s
	[INFO] 10.244.2.2:38781 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098409s
	[INFO] 10.244.1.2:58603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132708s
	[INFO] 10.244.2.2:42857 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129277s
	[INFO] 10.244.2.2:45518 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176537s
	[INFO] 10.244.1.2:38437 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111768s
	[INFO] 10.244.1.2:41860 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000210674s
	
	
	==> coredns [cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c] <==
	[INFO] 10.244.1.2:55200 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103152s
	[INFO] 10.244.1.2:37940 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070078s
	[INFO] 10.244.1.2:48078 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001304627s
	[INFO] 10.244.1.2:45924 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156493s
	[INFO] 10.244.1.2:43327 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095286s
	[INFO] 10.244.1.2:49398 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142472s
	[INFO] 10.244.0.4:55102 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007626s
	[INFO] 10.244.0.4:47068 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069112s
	[INFO] 10.244.0.4:33535 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071265s
	[INFO] 10.244.2.2:46044 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143827s
	[INFO] 10.244.1.2:35109 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129619s
	[INFO] 10.244.1.2:48280 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075012s
	[INFO] 10.244.1.2:56918 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057676s
	[INFO] 10.244.0.4:36784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195227s
	[INFO] 10.244.0.4:42172 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072797s
	[INFO] 10.244.0.4:38471 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054713s
	[INFO] 10.244.0.4:55016 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052519s
	[INFO] 10.244.2.2:35590 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000286422s
	[INFO] 10.244.2.2:40026 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000371873s
	[INFO] 10.244.1.2:41980 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000310548s
	[INFO] 10.244.1.2:46445 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000266363s
	[INFO] 10.244.0.4:35492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100381s
	[INFO] 10.244.0.4:42544 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00004087s
	[INFO] 10.244.0.4:35643 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111779s
	[INFO] 10.244.0.4:38933 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000030463s
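
Query entries like the coredns lines above come from in-cluster DNS lookups; one way to generate a comparable lookup for spot-checking (a sketch, assuming the kubectl context carries the ha-565881 profile name and the busybox pod listed in the container table) is:

    kubectl --context ha-565881 exec busybox-fc5497c4f-sxdsp -- nslookup kubernetes.default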
	
	
	==> describe nodes <==
	Name:               ha-565881
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_20_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:20:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:27:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:20:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:20:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:20:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:21:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-565881
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6899f2542334306bf4c50f49702dfb5
	  System UUID:                c6899f25-4233-4306-bf4c-50f49702dfb5
	  Boot ID:                    f5b041e8-ae19-4f7a-ac0d-a039fbca796b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sxdsp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 coredns-7db6d8ff4d-7wsqq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m7s
	  kube-system                 coredns-7db6d8ff4d-xftzx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m6s
	  kube-system                 etcd-ha-565881                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m20s
	  kube-system                 kindnet-5lrdt                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m7s
	  kube-system                 kube-apiserver-ha-565881             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-controller-manager-ha-565881    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-proxy-7p2jl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-scheduler-ha-565881             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-vip-ha-565881                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m5s   kube-proxy       
	  Normal  Starting                 7m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m20s  kubelet          Node ha-565881 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m20s  kubelet          Node ha-565881 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m20s  kubelet          Node ha-565881 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m8s   node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	  Normal  NodeReady                6m52s  kubelet          Node ha-565881 status is now: NodeReady
	  Normal  RegisteredNode           6m1s   node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	  Normal  RegisteredNode           4m46s  node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	
	
	Name:               ha-565881-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_21_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:21:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:24:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:25:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:25:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:25:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 00:23:42 +0000   Wed, 17 Jul 2024 00:25:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-565881-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 002cfcb8afdc450f9dbf024dbe1dd968
	  System UUID:                002cfcb8-afdc-450f-9dbf-024dbe1dd968
	  Boot ID:                    e960dff3-4ffd-424d-9228-f77aa5cf198a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rdpwj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 etcd-ha-565881-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-k882n                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m20s
	  kube-system                 kube-apiserver-ha-565881-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-ha-565881-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-proxy-2f9rj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-scheduler-ha-565881-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-vip-ha-565881-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m20s (x8 over 6m20s)  kubelet          Node ha-565881-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s (x8 over 6m20s)  kubelet          Node ha-565881-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s (x7 over 6m20s)  kubelet          Node ha-565881-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m18s                  node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  RegisteredNode           6m1s                   node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  NodeNotReady             2m46s                  node-controller  Node ha-565881-m02 status is now: NodeNotReady
	
	
	Name:               ha-565881-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_22_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:22:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:27:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:23:26 +0000   Wed, 17 Jul 2024 00:22:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:23:26 +0000   Wed, 17 Jul 2024 00:22:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:23:26 +0000   Wed, 17 Jul 2024 00:22:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:23:26 +0000   Wed, 17 Jul 2024 00:23:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-565881-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d0000c1f74247c095cd9247f3f0c350
	  System UUID:                3d0000c1-f742-47c0-95cd-9247f3f0c350
	  Boot ID:                    4fa63eff-e26e-4a4c-8360-5dc73aba6ea0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lmz4q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-565881-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m3s
	  kube-system                 kindnet-ctstx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m5s
	  kube-system                 kube-apiserver-ha-565881-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-ha-565881-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-k5x6x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-ha-565881-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-vip-ha-565881-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node ha-565881-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node ha-565881-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m5s)  kubelet          Node ha-565881-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m4s                 node-controller  Node ha-565881-m03 event: Registered Node ha-565881-m03 in Controller
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-565881-m03 event: Registered Node ha-565881-m03 in Controller
	  Normal  RegisteredNode           4m47s                node-controller  Node ha-565881-m03 event: Registered Node ha-565881-m03 in Controller
	
	
	Name:               ha-565881-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_23_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:23:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:27:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:23:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:23:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:23:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:24:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-565881-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 008ae63d929d475b8bab60c832202ce9
	  System UUID:                008ae63d-929d-475b-8bab-60c832202ce9
	  Boot ID:                    3540bc22-336a-438e-8b63-852810ced32c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-xz7nj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-proxy-p5xml    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m55s                kube-proxy       
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  NodeHasSufficientMemory  4m2s (x2 over 4m2s)  kubelet          Node ha-565881-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x2 over 4m2s)  kubelet          Node ha-565881-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x2 over 4m2s)  kubelet          Node ha-565881-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  NodeReady                3m43s                kubelet          Node ha-565881-m04 status is now: NodeReady
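
The four node summaries above can typically be regenerated against the same cluster (a sketch, assuming the kubectl context is named after the ha-565881 profile):

    kubectl --context ha-565881 describe nodes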
	
	
	==> dmesg <==
	[Jul17 00:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049979] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040150] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.513897] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.375698] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.513665] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.825427] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057593] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065677] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.195559] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.109938] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.261884] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.129275] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.597572] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.062309] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.075955] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.082514] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.034910] kauditd_printk_skb: 21 callbacks suppressed
	[Jul17 00:21] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.822749] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36] <==
	{"level":"warn","ts":"2024-07-17T00:27:59.905336Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:27:59.913273Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:27:59.920421Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:27:59.927152Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:27:59.934462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:27:59.93888Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:27:59.942352Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:27:59.954158Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:27:59.960908Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:27:59.97703Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:27:59.985065Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:27:59.989538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:27:59.993182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:28:00.00824Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:28:00.012796Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:28:00.019978Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:28:00.036984Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:28:00.042985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:28:00.050024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:28:00.061983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:28:00.068611Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:28:00.079759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:28:00.093674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:28:00.107226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:28:00.116473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:28:00 up 7 min,  0 users,  load average: 0.26, 0.23, 0.12
	Linux ha-565881 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146] <==
	I0717 00:27:26.731628       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:27:36.727540       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:27:36.727651       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:27:36.727905       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:27:36.727947       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:27:36.728079       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:27:36.728121       1 main.go:303] handling current node
	I0717 00:27:36.728152       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:27:36.728170       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:27:46.731085       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:27:46.731154       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:27:46.731314       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:27:46.731340       1 main.go:303] handling current node
	I0717 00:27:46.731353       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:27:46.731358       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:27:46.731436       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:27:46.731462       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:27:56.723206       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:27:56.723266       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:27:56.723419       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:27:56.723453       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:27:56.723506       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:27:56.723512       1 main.go:303] handling current node
	I0717 00:27:56.723542       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:27:56.723545       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff] <==
	I0717 00:20:38.428497       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 00:20:38.441536       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238]
	I0717 00:20:38.442661       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:20:38.448071       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:20:38.632419       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 00:20:39.609562       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:20:39.639817       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 00:20:39.666647       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:20:52.735821       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 00:20:52.834990       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 00:23:25.816126       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37258: use of closed network connection
	E0717 00:23:26.006470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37288: use of closed network connection
	E0717 00:23:26.398340       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37324: use of closed network connection
	E0717 00:23:26.575363       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37338: use of closed network connection
	E0717 00:23:26.756657       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37350: use of closed network connection
	E0717 00:23:26.948378       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37364: use of closed network connection
	E0717 00:23:27.143312       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37386: use of closed network connection
	E0717 00:23:27.325862       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37414: use of closed network connection
	E0717 00:23:27.613922       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37442: use of closed network connection
	E0717 00:23:27.795829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37464: use of closed network connection
	E0717 00:23:27.979849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37486: use of closed network connection
	E0717 00:23:28.145679       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37498: use of closed network connection
	E0717 00:23:28.327975       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37520: use of closed network connection
	E0717 00:23:28.507457       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37544: use of closed network connection
	W0717 00:24:58.447330       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238 192.168.39.97]
	
	
	==> kube-controller-manager [2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c] <==
	I0717 00:23:21.940022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="201.725283ms"
	I0717 00:23:22.153168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="212.381321ms"
	I0717 00:23:22.197631       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.425921ms"
	I0717 00:23:22.215343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.665283ms"
	I0717 00:23:22.216352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.69µs"
	I0717 00:23:22.323817       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="317.447µs"
	I0717 00:23:23.345654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.841373ms"
	I0717 00:23:23.345858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.507µs"
	I0717 00:23:23.372371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.161µs"
	I0717 00:23:23.372774       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.276µs"
	I0717 00:23:23.391087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.196µs"
	I0717 00:23:23.402857       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.302µs"
	I0717 00:23:23.406873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.996µs"
	I0717 00:23:23.424098       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.913µs"
	I0717 00:23:24.299215       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.816356ms"
	I0717 00:23:24.299638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.466µs"
	I0717 00:23:25.347987       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.734712ms"
	I0717 00:23:25.348235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.862µs"
	I0717 00:23:58.654038       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-565881-m04\" does not exist"
	I0717 00:23:58.795421       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-565881-m04" podCIDRs=["10.244.3.0/24"]
	I0717 00:24:01.920061       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565881-m04"
	I0717 00:24:17.499789       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565881-m04"
	I0717 00:25:13.761655       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565881-m04"
	I0717 00:25:13.977677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.715247ms"
	I0717 00:25:13.979128       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="435.771µs"
	
	
	==> kube-proxy [e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f] <==
	I0717 00:20:53.864763       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:20:53.887642       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.238"]
	I0717 00:20:53.970646       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:20:53.970727       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:20:53.970745       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:20:53.973519       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:20:53.973945       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:20:53.973980       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:20:53.975966       1 config.go:192] "Starting service config controller"
	I0717 00:20:53.977488       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:20:53.977564       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:20:53.977586       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:20:53.978775       1 config.go:319] "Starting node config controller"
	I0717 00:20:53.978827       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:20:54.078104       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:20:54.079286       1 shared_informer.go:320] Caches are synced for node config
	I0717 00:20:54.081344       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6] <==
	W0717 00:20:37.874563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:20:37.874615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:20:38.003436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:20:38.003486       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:20:38.057021       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:20:38.057073       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 00:20:40.978638       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 00:22:55.359047       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bmbng\": pod kube-proxy-bmbng is already assigned to node \"ha-565881-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bmbng" node="ha-565881-m03"
	E0717 00:22:55.359248       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3e8023d6-ad43-4db7-a250-b93a258d64d4(kube-system/kube-proxy-bmbng) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-bmbng"
	E0717 00:22:55.359272       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bmbng\": pod kube-proxy-bmbng is already assigned to node \"ha-565881-m03\"" pod="kube-system/kube-proxy-bmbng"
	I0717 00:22:55.359312       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-bmbng" node="ha-565881-m03"
	E0717 00:23:21.624179       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-n7vc5\": pod busybox-fc5497c4f-n7vc5 is already assigned to node \"ha-565881-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-n7vc5" node="ha-565881-m02"
	E0717 00:23:21.624350       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5b78d075-375f-4f69-8471-5d953de0d009(default/busybox-fc5497c4f-n7vc5) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-n7vc5"
	E0717 00:23:21.624402       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-n7vc5\": pod busybox-fc5497c4f-n7vc5 is already assigned to node \"ha-565881-m02\"" pod="default/busybox-fc5497c4f-n7vc5"
	I0717 00:23:21.624441       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-n7vc5" node="ha-565881-m02"
	E0717 00:23:58.823462       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-xz7nj\": pod kindnet-xz7nj is already assigned to node \"ha-565881-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-xz7nj" node="ha-565881-m04"
	E0717 00:23:58.823582       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-xz7nj\": pod kindnet-xz7nj is already assigned to node \"ha-565881-m04\"" pod="kube-system/kindnet-xz7nj"
	E0717 00:23:58.897275       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-njsjv\": pod kube-proxy-njsjv is already assigned to node \"ha-565881-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-njsjv" node="ha-565881-m04"
	E0717 00:23:58.897422       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0521a710-eba8-4a60-89ab-3d97d26fa540(kube-system/kube-proxy-njsjv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-njsjv"
	E0717 00:23:58.897446       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-njsjv\": pod kube-proxy-njsjv is already assigned to node \"ha-565881-m04\"" pod="kube-system/kube-proxy-njsjv"
	I0717 00:23:58.897468       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-njsjv" node="ha-565881-m04"
	E0717 00:23:58.899913       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-r6sqd\": pod kindnet-r6sqd is already assigned to node \"ha-565881-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-r6sqd" node="ha-565881-m04"
	E0717 00:23:58.900001       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c458e5d6-fe79-40d8-bdea-1bd3aade37d2(kube-system/kindnet-r6sqd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-r6sqd"
	E0717 00:23:58.900023       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-r6sqd\": pod kindnet-r6sqd is already assigned to node \"ha-565881-m04\"" pod="kube-system/kindnet-r6sqd"
	I0717 00:23:58.900048       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-r6sqd" node="ha-565881-m04"
	
	
	==> kubelet <==
	Jul 17 00:23:39 ha-565881 kubelet[1370]: E0717 00:23:39.574530    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:23:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:23:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:23:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:23:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:24:39 ha-565881 kubelet[1370]: E0717 00:24:39.572945    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:24:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:24:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:24:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:24:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:25:39 ha-565881 kubelet[1370]: E0717 00:25:39.587771    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:25:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:25:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:25:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:25:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:26:39 ha-565881 kubelet[1370]: E0717 00:26:39.582816    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:26:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:26:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:26:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:26:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:27:39 ha-565881 kubelet[1370]: E0717 00:27:39.573768    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:27:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:27:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:27:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:27:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565881 -n ha-565881
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565881 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.04s)
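For reference, the post-mortem check above (helpers_test.go:261) asks kubectl for every pod whose phase is not Running. A minimal client-go sketch of the same query follows; it is illustrative only, not the test helper's code, and the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the runs in this report use the minikube profile's kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same field selector the post-mortem step passes to kubectl: pods not in the Running phase.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}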

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (798.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-565881 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-565881 -v=7 --alsologtostderr
E0717 00:29:18.738835   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:29:46.423791   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-565881 -v=7 --alsologtostderr: exit status 82 (2m1.932896468s)

                                                
                                                
-- stdout --
	* Stopping node "ha-565881-m04"  ...
	* Stopping node "ha-565881-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:28:01.535805   36641 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:28:01.535896   36641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:28:01.535904   36641 out.go:304] Setting ErrFile to fd 2...
	I0717 00:28:01.535908   36641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:28:01.536390   36641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:28:01.536757   36641 out.go:298] Setting JSON to false
	I0717 00:28:01.536868   36641 mustload.go:65] Loading cluster: ha-565881
	I0717 00:28:01.537448   36641 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:28:01.537566   36641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:28:01.537759   36641 mustload.go:65] Loading cluster: ha-565881
	I0717 00:28:01.537905   36641 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:28:01.537928   36641 stop.go:39] StopHost: ha-565881-m04
	I0717 00:28:01.538254   36641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:28:01.538294   36641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:28:01.552935   36641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41787
	I0717 00:28:01.553388   36641 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:28:01.553977   36641 main.go:141] libmachine: Using API Version  1
	I0717 00:28:01.553996   36641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:28:01.554325   36641 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:28:01.556839   36641 out.go:177] * Stopping node "ha-565881-m04"  ...
	I0717 00:28:01.558221   36641 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 00:28:01.558248   36641 main.go:141] libmachine: (ha-565881-m04) Calling .DriverName
	I0717 00:28:01.558498   36641 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 00:28:01.558525   36641 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	I0717 00:28:01.561602   36641 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:28:01.562058   36641 main.go:141] libmachine: (ha-565881-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:6e:49", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:23:43 +0000 UTC Type:0 Mac:52:54:00:f0:6e:49 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-565881-m04 Clientid:01:52:54:00:f0:6e:49}
	I0717 00:28:01.562084   36641 main.go:141] libmachine: (ha-565881-m04) DBG | domain ha-565881-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:f0:6e:49 in network mk-ha-565881
	I0717 00:28:01.562229   36641 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHPort
	I0717 00:28:01.562403   36641 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHKeyPath
	I0717 00:28:01.562527   36641 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHUsername
	I0717 00:28:01.562659   36641 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m04/id_rsa Username:docker}
	I0717 00:28:01.648028   36641 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 00:28:01.701885   36641 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 00:28:01.755949   36641 main.go:141] libmachine: Stopping "ha-565881-m04"...
	I0717 00:28:01.756003   36641 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:28:01.757517   36641 main.go:141] libmachine: (ha-565881-m04) Calling .Stop
	I0717 00:28:01.761073   36641 main.go:141] libmachine: (ha-565881-m04) Waiting for machine to stop 0/120
	I0717 00:28:03.002186   36641 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:28:03.003526   36641 main.go:141] libmachine: Machine "ha-565881-m04" was stopped.
	I0717 00:28:03.003543   36641 stop.go:75] duration metric: took 1.445325818s to stop
	I0717 00:28:03.003561   36641 stop.go:39] StopHost: ha-565881-m03
	I0717 00:28:03.003869   36641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:28:03.003906   36641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:28:03.018494   36641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0717 00:28:03.018964   36641 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:28:03.019460   36641 main.go:141] libmachine: Using API Version  1
	I0717 00:28:03.019483   36641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:28:03.019805   36641 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:28:03.022022   36641 out.go:177] * Stopping node "ha-565881-m03"  ...
	I0717 00:28:03.023400   36641 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 00:28:03.023424   36641 main.go:141] libmachine: (ha-565881-m03) Calling .DriverName
	I0717 00:28:03.023638   36641 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 00:28:03.023657   36641 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHHostname
	I0717 00:28:03.026608   36641 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:28:03.027095   36641 main.go:141] libmachine: (ha-565881-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:60:7e", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:22:17 +0000 UTC Type:0 Mac:52:54:00:43:60:7e Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-565881-m03 Clientid:01:52:54:00:43:60:7e}
	I0717 00:28:03.027133   36641 main.go:141] libmachine: (ha-565881-m03) DBG | domain ha-565881-m03 has defined IP address 192.168.39.97 and MAC address 52:54:00:43:60:7e in network mk-ha-565881
	I0717 00:28:03.027274   36641 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHPort
	I0717 00:28:03.027422   36641 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHKeyPath
	I0717 00:28:03.027554   36641 main.go:141] libmachine: (ha-565881-m03) Calling .GetSSHUsername
	I0717 00:28:03.027693   36641 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m03/id_rsa Username:docker}
	I0717 00:28:03.116207   36641 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 00:28:03.172393   36641 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 00:28:03.228200   36641 main.go:141] libmachine: Stopping "ha-565881-m03"...
	I0717 00:28:03.228224   36641 main.go:141] libmachine: (ha-565881-m03) Calling .GetState
	I0717 00:28:03.229833   36641 main.go:141] libmachine: (ha-565881-m03) Calling .Stop
	I0717 00:28:03.233063   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 0/120
	I0717 00:28:04.234997   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 1/120
	I0717 00:28:05.236339   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 2/120
	I0717 00:28:06.237735   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 3/120
	I0717 00:28:07.239066   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 4/120
	I0717 00:28:08.240376   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 5/120
	I0717 00:28:09.242210   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 6/120
	I0717 00:28:10.243798   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 7/120
	I0717 00:28:11.245425   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 8/120
	I0717 00:28:12.246861   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 9/120
	I0717 00:28:13.248933   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 10/120
	I0717 00:28:14.251057   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 11/120
	I0717 00:28:15.252704   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 12/120
	I0717 00:28:16.254317   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 13/120
	I0717 00:28:17.256008   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 14/120
	I0717 00:28:18.258009   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 15/120
	I0717 00:28:19.259859   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 16/120
	I0717 00:28:20.261394   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 17/120
	I0717 00:28:21.262729   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 18/120
	I0717 00:28:22.264286   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 19/120
	I0717 00:28:23.266169   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 20/120
	I0717 00:28:24.267644   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 21/120
	I0717 00:28:25.269216   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 22/120
	I0717 00:28:26.270582   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 23/120
	I0717 00:28:27.272139   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 24/120
	I0717 00:28:28.273754   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 25/120
	I0717 00:28:29.274995   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 26/120
	I0717 00:28:30.276372   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 27/120
	I0717 00:28:31.277528   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 28/120
	I0717 00:28:32.279257   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 29/120
	I0717 00:28:33.281039   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 30/120
	I0717 00:28:34.282783   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 31/120
	I0717 00:28:35.284323   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 32/120
	I0717 00:28:36.285691   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 33/120
	I0717 00:28:37.287127   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 34/120
	I0717 00:28:38.288887   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 35/120
	I0717 00:28:39.290878   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 36/120
	I0717 00:28:40.292498   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 37/120
	I0717 00:28:41.294447   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 38/120
	I0717 00:28:42.295871   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 39/120
	I0717 00:28:43.297281   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 40/120
	I0717 00:28:44.298673   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 41/120
	I0717 00:28:45.300550   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 42/120
	I0717 00:28:46.301948   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 43/120
	I0717 00:28:47.303471   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 44/120
	I0717 00:28:48.305237   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 45/120
	I0717 00:28:49.307198   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 46/120
	I0717 00:28:50.308486   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 47/120
	I0717 00:28:51.309969   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 48/120
	I0717 00:28:52.311303   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 49/120
	I0717 00:28:53.312640   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 50/120
	I0717 00:28:54.314039   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 51/120
	I0717 00:28:55.315391   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 52/120
	I0717 00:28:56.317339   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 53/120
	I0717 00:28:57.319318   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 54/120
	I0717 00:28:58.321065   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 55/120
	I0717 00:28:59.322543   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 56/120
	I0717 00:29:00.323850   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 57/120
	I0717 00:29:01.325430   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 58/120
	I0717 00:29:02.326979   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 59/120
	I0717 00:29:03.328840   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 60/120
	I0717 00:29:04.330050   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 61/120
	I0717 00:29:05.331503   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 62/120
	I0717 00:29:06.332939   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 63/120
	I0717 00:29:07.334094   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 64/120
	I0717 00:29:08.335656   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 65/120
	I0717 00:29:09.337208   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 66/120
	I0717 00:29:10.338442   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 67/120
	I0717 00:29:11.339738   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 68/120
	I0717 00:29:12.341265   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 69/120
	I0717 00:29:13.342974   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 70/120
	I0717 00:29:14.344317   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 71/120
	I0717 00:29:15.345717   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 72/120
	I0717 00:29:16.346976   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 73/120
	I0717 00:29:17.348350   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 74/120
	I0717 00:29:18.350171   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 75/120
	I0717 00:29:19.351392   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 76/120
	I0717 00:29:20.352958   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 77/120
	I0717 00:29:21.355045   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 78/120
	I0717 00:29:22.356472   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 79/120
	I0717 00:29:23.358347   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 80/120
	I0717 00:29:24.359811   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 81/120
	I0717 00:29:25.361272   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 82/120
	I0717 00:29:26.362464   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 83/120
	I0717 00:29:27.363978   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 84/120
	I0717 00:29:28.365825   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 85/120
	I0717 00:29:29.367226   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 86/120
	I0717 00:29:30.368601   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 87/120
	I0717 00:29:31.369889   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 88/120
	I0717 00:29:32.371176   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 89/120
	I0717 00:29:33.373073   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 90/120
	I0717 00:29:34.374581   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 91/120
	I0717 00:29:35.375989   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 92/120
	I0717 00:29:36.377398   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 93/120
	I0717 00:29:37.378783   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 94/120
	I0717 00:29:38.380882   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 95/120
	I0717 00:29:39.383214   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 96/120
	I0717 00:29:40.384898   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 97/120
	I0717 00:29:41.387021   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 98/120
	I0717 00:29:42.388434   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 99/120
	I0717 00:29:43.391118   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 100/120
	I0717 00:29:44.393614   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 101/120
	I0717 00:29:45.394904   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 102/120
	I0717 00:29:46.396450   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 103/120
	I0717 00:29:47.397763   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 104/120
	I0717 00:29:48.399337   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 105/120
	I0717 00:29:49.401643   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 106/120
	I0717 00:29:50.402998   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 107/120
	I0717 00:29:51.404512   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 108/120
	I0717 00:29:52.405860   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 109/120
	I0717 00:29:53.407646   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 110/120
	I0717 00:29:54.409148   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 111/120
	I0717 00:29:55.411057   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 112/120
	I0717 00:29:56.412359   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 113/120
	I0717 00:29:57.413877   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 114/120
	I0717 00:29:58.415745   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 115/120
	I0717 00:29:59.417080   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 116/120
	I0717 00:30:00.419192   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 117/120
	I0717 00:30:01.420332   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 118/120
	I0717 00:30:02.421621   36641 main.go:141] libmachine: (ha-565881-m03) Waiting for machine to stop 119/120
	I0717 00:30:03.422270   36641 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 00:30:03.422310   36641 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 00:30:03.424194   36641 out.go:177] 
	W0717 00:30:03.425564   36641 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 00:30:03.425574   36641 out.go:239] * 
	* 
	W0717 00:30:03.428307   36641 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 00:30:03.430493   36641 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-565881 -v=7 --alsologtostderr" : exit status 82
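The stop failure above follows the pattern visible in the stderr capture: after backing up /etc/cni and /etc/kubernetes, minikube asks the kvm2 driver to stop the VM and then polls its state roughly once per second ("Waiting for machine to stop N/120"), giving up after 120 attempts and exiting with status 82 (GUEST_STOP_TIMEOUT). A minimal Go sketch of that poll-and-give-up loop follows; it is an illustration only, not minikube's implementation, and getState is a hypothetical stand-in for the libmachine driver's GetState call.

package main

import (
	"fmt"
	"time"
)

// getState is a stand-in for the driver call; the real code queries libvirt for the VM state.
func getState() string { return "Running" }

func waitForStop(attempts int) error {
	for i := 0; i < attempts; i++ {
		if getState() == "Stopped" {
			return nil
		}
		// Mirrors the "Waiting for machine to stop N/120" lines in the log above.
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", getState())
}

func main() {
	if err := waitForStop(120); err != nil {
		// minikube surfaces this condition as GUEST_STOP_TIMEOUT (exit status 82).
		fmt.Println("stop err:", err)
	}
}

Because ha-565881-m03 never reports Stopped within that window, the test proceeds to the full restart attempted by the start invocation below.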
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-565881 --wait=true -v=7 --alsologtostderr
E0717 00:32:12.451040   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:33:35.497312   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:34:18.738606   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:37:12.451630   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:39:18.738708   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:40:41.784190   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-565881 --wait=true -v=7 --alsologtostderr: exit status 80 (11m13.606993764s)

                                                
                                                
-- stdout --
	* [ha-565881] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-565881" primary control-plane node in "ha-565881" cluster
	* Updating the running kvm2 "ha-565881" VM ...
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-565881-m02" control-plane node in "ha-565881" cluster
	* Restarting existing kvm2 VM for "ha-565881-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.238
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.238
	* Verifying Kubernetes components...
	
	* Starting "ha-565881-m03" control-plane node in "ha-565881" cluster
	* Restarting existing kvm2 VM for "ha-565881-m03" ...
	* Found network options:
	  - NO_PROXY=192.168.39.238,192.168.39.14
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.238
	  - env NO_PROXY=192.168.39.238,192.168.39.14
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:30:03.472958   37091 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:30:03.473178   37091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:30:03.473186   37091 out.go:304] Setting ErrFile to fd 2...
	I0717 00:30:03.473190   37091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:30:03.473344   37091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:30:03.473853   37091 out.go:298] Setting JSON to false
	I0717 00:30:03.474716   37091 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4352,"bootTime":1721171851,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:30:03.474771   37091 start.go:139] virtualization: kvm guest
	I0717 00:30:03.477060   37091 out.go:177] * [ha-565881] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:30:03.478329   37091 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:30:03.478403   37091 notify.go:220] Checking for updates...
	I0717 00:30:03.480995   37091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:30:03.482344   37091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:30:03.483547   37091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:30:03.484814   37091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:30:03.485998   37091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:30:03.487571   37091 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:30:03.487666   37091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:30:03.488110   37091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:30:03.488183   37091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:30:03.502769   37091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46857
	I0717 00:30:03.503194   37091 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:30:03.503743   37091 main.go:141] libmachine: Using API Version  1
	I0717 00:30:03.503765   37091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:30:03.504103   37091 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:30:03.504301   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:30:03.541510   37091 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 00:30:03.542844   37091 start.go:297] selected driver: kvm2
	I0717 00:30:03.542856   37091 start.go:901] validating driver "kvm2" against &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:30:03.543000   37091 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:30:03.543351   37091 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:30:03.543431   37091 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:30:03.558318   37091 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:30:03.559016   37091 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:30:03.559046   37091 cni.go:84] Creating CNI manager for ""
	I0717 00:30:03.559054   37091 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 00:30:03.559112   37091 start.go:340] cluster config:
	{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:30:03.559252   37091 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:30:03.561017   37091 out.go:177] * Starting "ha-565881" primary control-plane node in "ha-565881" cluster
	I0717 00:30:03.562183   37091 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:30:03.562210   37091 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:30:03.562219   37091 cache.go:56] Caching tarball of preloaded images
	I0717 00:30:03.562282   37091 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:30:03.562291   37091 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:30:03.562398   37091 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:30:03.562605   37091 start.go:360] acquireMachinesLock for ha-565881: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:30:03.562643   37091 start.go:364] duration metric: took 22.287µs to acquireMachinesLock for "ha-565881"
	I0717 00:30:03.562657   37091 start.go:96] Skipping create...Using existing machine configuration
	I0717 00:30:03.562665   37091 fix.go:54] fixHost starting: 
	I0717 00:30:03.562913   37091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:30:03.562942   37091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:30:03.577346   37091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
	I0717 00:30:03.577771   37091 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:30:03.578283   37091 main.go:141] libmachine: Using API Version  1
	I0717 00:30:03.578307   37091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:30:03.578612   37091 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:30:03.578778   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:30:03.578956   37091 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:30:03.580457   37091 fix.go:112] recreateIfNeeded on ha-565881: state=Running err=<nil>
	W0717 00:30:03.580473   37091 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 00:30:03.583293   37091 out.go:177] * Updating the running kvm2 "ha-565881" VM ...
	I0717 00:30:03.584488   37091 machine.go:94] provisionDockerMachine start ...
	I0717 00:30:03.584508   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:30:03.584718   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.586840   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.587288   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.587320   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.587446   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:03.587598   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.587745   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.587877   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:03.588058   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:03.588246   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:03.588256   37091 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:30:03.705684   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881
	
	I0717 00:30:03.705712   37091 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:30:03.705945   37091 buildroot.go:166] provisioning hostname "ha-565881"
	I0717 00:30:03.705986   37091 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:30:03.706223   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.708858   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.709223   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.709249   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.709419   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:03.709680   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.709842   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.709989   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:03.710164   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:03.710330   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:03.710374   37091 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565881 && echo "ha-565881" | sudo tee /etc/hostname
	I0717 00:30:03.843470   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881
	
	I0717 00:30:03.843498   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.846412   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.846780   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.846804   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.847036   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:03.847216   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.847358   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.847507   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:03.847645   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:03.847802   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:03.847816   37091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565881/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:30:03.965266   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:30:03.965298   37091 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:30:03.965331   37091 buildroot.go:174] setting up certificates
	I0717 00:30:03.965342   37091 provision.go:84] configureAuth start
	I0717 00:30:03.965358   37091 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:30:03.965599   37091 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:30:03.968261   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.968685   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.968720   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.968867   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.971217   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.971529   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.971549   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.971639   37091 provision.go:143] copyHostCerts
	I0717 00:30:03.971663   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:30:03.971726   37091 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 00:30:03.971745   37091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:30:03.971812   37091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:30:03.971911   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:30:03.971939   37091 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 00:30:03.971948   37091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:30:03.972001   37091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:30:03.972058   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:30:03.972075   37091 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 00:30:03.972081   37091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:30:03.972106   37091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:30:03.972159   37091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.ha-565881 san=[127.0.0.1 192.168.39.238 ha-565881 localhost minikube]
	I0717 00:30:04.115427   37091 provision.go:177] copyRemoteCerts
	I0717 00:30:04.115482   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:30:04.115503   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:04.118744   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.119317   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:04.119347   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.119555   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:04.119745   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:04.119928   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:04.120090   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:30:04.208734   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:30:04.208802   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 00:30:04.237408   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:30:04.237489   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:30:04.264010   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:30:04.264070   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:30:04.287879   37091 provision.go:87] duration metric: took 322.51954ms to configureAuth
	I0717 00:30:04.287910   37091 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:30:04.288184   37091 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:30:04.288255   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:04.290649   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.291089   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:04.291116   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.291289   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:04.291470   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:04.291640   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:04.291741   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:04.291873   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:04.292044   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:04.292058   37091 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:31:35.247731   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:31:35.247757   37091 machine.go:97] duration metric: took 1m31.66325606s to provisionDockerMachine
	I0717 00:31:35.247768   37091 start.go:293] postStartSetup for "ha-565881" (driver="kvm2")
	I0717 00:31:35.247799   37091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:31:35.247824   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.248178   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:31:35.248207   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.251173   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.251605   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.251648   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.251775   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.251956   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.252113   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.252239   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:31:35.341073   37091 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:31:35.345318   37091 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:31:35.345349   37091 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 00:31:35.345409   37091 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 00:31:35.345487   37091 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 00:31:35.345496   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /etc/ssl/certs/200682.pem
	I0717 00:31:35.345577   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:31:35.355014   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:31:35.378321   37091 start.go:296] duration metric: took 130.540009ms for postStartSetup
	I0717 00:31:35.378364   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.378645   37091 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 00:31:35.378668   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.381407   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.381759   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.381777   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.381950   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.382135   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.382269   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.382390   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	W0717 00:31:35.467602   37091 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0717 00:31:35.467627   37091 fix.go:56] duration metric: took 1m31.904962355s for fixHost
	I0717 00:31:35.467654   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.470742   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.471061   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.471092   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.471293   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.471500   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.471682   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.471811   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.471998   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:31:35.472184   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:31:35.472199   37091 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 00:31:35.585646   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721176295.539868987
	
	I0717 00:31:35.585669   37091 fix.go:216] guest clock: 1721176295.539868987
	I0717 00:31:35.585675   37091 fix.go:229] Guest: 2024-07-17 00:31:35.539868987 +0000 UTC Remote: 2024-07-17 00:31:35.467636929 +0000 UTC m=+92.028103333 (delta=72.232058ms)
	I0717 00:31:35.585712   37091 fix.go:200] guest clock delta is within tolerance: 72.232058ms
	I0717 00:31:35.585718   37091 start.go:83] releasing machines lock for "ha-565881", held for 1m32.023065415s
	I0717 00:31:35.585737   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.585998   37091 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:31:35.588681   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.589073   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.589105   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.589223   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.589658   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.589816   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.589949   37091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:31:35.590001   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.590075   37091 ssh_runner.go:195] Run: cat /version.json
	I0717 00:31:35.590101   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.592529   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.592811   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.592884   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.592925   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.593058   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.593206   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.593215   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.593229   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.593401   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.593410   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.593555   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.593554   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:31:35.593674   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.593812   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:31:35.674134   37091 ssh_runner.go:195] Run: systemctl --version
	I0717 00:31:35.702524   37091 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:31:35.860996   37091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:31:35.869782   37091 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:31:35.869845   37091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:31:35.878978   37091 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 00:31:35.879007   37091 start.go:495] detecting cgroup driver to use...
	I0717 00:31:35.879098   37091 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:31:35.895504   37091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:31:35.909937   37091 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:31:35.909986   37091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:31:35.923661   37091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:31:35.937352   37091 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:31:36.114537   37091 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:31:36.337616   37091 docker.go:233] disabling docker service ...
	I0717 00:31:36.337696   37091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:31:36.368404   37091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:31:36.382665   37091 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:31:36.542136   37091 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:31:36.694879   37091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:31:36.710588   37091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:31:36.730775   37091 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:31:36.730835   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.742887   37091 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:31:36.742962   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.753720   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.764188   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.774456   37091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:31:36.785055   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.795722   37091 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.806771   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.817066   37091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:31:36.826812   37091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:31:36.836656   37091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:31:36.977073   37091 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:31:46.703564   37091 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.72645615s)
	I0717 00:31:46.703601   37091 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:31:46.703656   37091 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:31:46.708592   37091 start.go:563] Will wait 60s for crictl version
	I0717 00:31:46.708643   37091 ssh_runner.go:195] Run: which crictl
	I0717 00:31:46.712405   37091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:31:46.748919   37091 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:31:46.748989   37091 ssh_runner.go:195] Run: crio --version
	I0717 00:31:46.776791   37091 ssh_runner.go:195] Run: crio --version
	I0717 00:31:46.805919   37091 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:31:46.807247   37091 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:31:46.809680   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:46.810066   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:46.810105   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:46.810335   37091 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:31:46.814801   37091 kubeadm.go:883] updating cluster {Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:31:46.814920   37091 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:31:46.814962   37091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:31:46.864570   37091 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:31:46.864592   37091 crio.go:433] Images already preloaded, skipping extraction
	I0717 00:31:46.864662   37091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:31:46.898334   37091 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:31:46.898361   37091 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:31:46.898374   37091 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.30.2 crio true true} ...
	I0717 00:31:46.898496   37091 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:31:46.898622   37091 ssh_runner.go:195] Run: crio config
	I0717 00:31:46.950419   37091 cni.go:84] Creating CNI manager for ""
	I0717 00:31:46.950449   37091 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 00:31:46.950466   37091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:31:46.950490   37091 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565881 NodeName:ha-565881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:31:46.950650   37091 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565881"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 00:31:46.950675   37091 kube-vip.go:115] generating kube-vip config ...
	I0717 00:31:46.950731   37091 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:31:46.962599   37091 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:31:46.962724   37091 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 00:31:46.962776   37091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:31:46.972441   37091 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:31:46.972515   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 00:31:46.981722   37091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 00:31:46.998862   37091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:31:47.016994   37091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 00:31:47.040256   37091 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:31:47.056667   37091 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:31:47.061956   37091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:31:47.205261   37091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:31:47.220035   37091 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881 for IP: 192.168.39.238
	I0717 00:31:47.220059   37091 certs.go:194] generating shared ca certs ...
	I0717 00:31:47.220074   37091 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:31:47.220232   37091 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 00:31:47.220289   37091 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 00:31:47.220306   37091 certs.go:256] generating profile certs ...
	I0717 00:31:47.220405   37091 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key
	I0717 00:31:47.220439   37091 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d
	I0717 00:31:47.220463   37091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.14 192.168.39.97 192.168.39.254]
	I0717 00:31:47.358180   37091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d ...
	I0717 00:31:47.358210   37091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d: {Name:mkbe0bb2172102aa8c7ea4b23ce0c7fe570174cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:31:47.358402   37091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d ...
	I0717 00:31:47.358423   37091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d: {Name:mkbcb38a702d9304a89a7717b83e8333c6851c66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:31:47.358518   37091 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt
	I0717 00:31:47.358723   37091 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key
	I0717 00:31:47.358880   37091 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key
	I0717 00:31:47.358905   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:31:47.358923   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:31:47.358947   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:31:47.358964   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:31:47.358980   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:31:47.358996   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:31:47.359014   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:31:47.359031   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:31:47.359093   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 00:31:47.359132   37091 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 00:31:47.359146   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:31:47.359174   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:31:47.359203   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:31:47.359237   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 00:31:47.359289   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:31:47.359329   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.359349   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.359367   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem -> /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.359929   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:31:47.386164   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:31:47.410527   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:31:47.434465   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:31:47.456999   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 00:31:47.480811   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 00:31:47.503411   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:31:47.526710   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:31:47.549885   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 00:31:47.573543   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:31:47.598119   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 00:31:47.621760   37091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:31:47.638631   37091 ssh_runner.go:195] Run: openssl version
	I0717 00:31:47.645238   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 00:31:47.655857   37091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.660235   37091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.660292   37091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.665757   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:31:47.674979   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:31:47.685757   37091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.689981   37091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.690028   37091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.695412   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:31:47.704384   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 00:31:47.714711   37091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.718924   37091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.718961   37091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.724398   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 00:31:47.733669   37091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:31:47.737932   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 00:31:47.743392   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 00:31:47.748664   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 00:31:47.753938   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 00:31:47.759225   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 00:31:47.764447   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 00:31:47.769709   37091 kubeadm.go:392] StartCluster: {Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:31:47.769816   37091 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:31:47.769867   37091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:31:47.806048   37091 cri.go:89] found id: "05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0"
	I0717 00:31:47.806070   37091 cri.go:89] found id: "42119e9324f11f4297cf4f2052d5440773e17236489ca34e1988564acce85cc1"
	I0717 00:31:47.806075   37091 cri.go:89] found id: "8b3db903a1f836c172e85c6e6229a0500c4729281c2733ba22e09d38ec08964b"
	I0717 00:31:47.806079   37091 cri.go:89] found id: "404747229eea4d41bdc771562fc8b910464a0694c31f9ae117eeaec79057382d"
	I0717 00:31:47.806083   37091 cri.go:89] found id: "dcda7fe2ea87d9d0412fd424de512c60b84b972996e99cbd410f5a517bb7bf6a"
	I0717 00:31:47.806087   37091 cri.go:89] found id: "928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519"
	I0717 00:31:47.806091   37091 cri.go:89] found id: "cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c"
	I0717 00:31:47.806095   37091 cri.go:89] found id: "52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146"
	I0717 00:31:47.806099   37091 cri.go:89] found id: "e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f"
	I0717 00:31:47.806106   37091 cri.go:89] found id: "14c44e183ef1f377bf131b0f0b7f0976adbdf72efd90beb01dfa5c8be36324e5"
	I0717 00:31:47.806111   37091 cri.go:89] found id: "1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6"
	I0717 00:31:47.806115   37091 cri.go:89] found id: "ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36"
	I0717 00:31:47.806120   37091 cri.go:89] found id: "2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c"
	I0717 00:31:47.806127   37091 cri.go:89] found id: "c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff"
	I0717 00:31:47.806132   37091 cri.go:89] found id: ""
	I0717 00:31:47.806177   37091 ssh_runner.go:195] Run: sudo runc list -f json

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-565881 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-565881
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565881 -n ha-565881
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565881 logs -n 25: (1.7628399s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m02:/home/docker/cp-test_ha-565881-m03_ha-565881-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m02 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m03_ha-565881-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04:/home/docker/cp-test_ha-565881-m03_ha-565881-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m04 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m03_ha-565881-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp testdata/cp-test.txt                                               | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile507733948/001/cp-test_ha-565881-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881:/home/docker/cp-test_ha-565881-m04_ha-565881.txt                      |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881 sudo cat                                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881.txt                                |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m02:/home/docker/cp-test_ha-565881-m04_ha-565881-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m02 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03:/home/docker/cp-test_ha-565881-m04_ha-565881-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m03 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-565881 node stop m02 -v=7                                                    | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-565881 node start m02 -v=7                                                   | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:27 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-565881 -v=7                                                          | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-565881 -v=7                                                               | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-565881 --wait=true -v=7                                                   | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:30 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-565881                                                               | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:30:03
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:30:03.472958   37091 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:30:03.473178   37091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:30:03.473186   37091 out.go:304] Setting ErrFile to fd 2...
	I0717 00:30:03.473190   37091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:30:03.473344   37091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:30:03.473853   37091 out.go:298] Setting JSON to false
	I0717 00:30:03.474716   37091 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4352,"bootTime":1721171851,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:30:03.474771   37091 start.go:139] virtualization: kvm guest
	I0717 00:30:03.477060   37091 out.go:177] * [ha-565881] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:30:03.478329   37091 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:30:03.478403   37091 notify.go:220] Checking for updates...
	I0717 00:30:03.480995   37091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:30:03.482344   37091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:30:03.483547   37091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:30:03.484814   37091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:30:03.485998   37091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:30:03.487571   37091 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:30:03.487666   37091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:30:03.488110   37091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:30:03.488183   37091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:30:03.502769   37091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46857
	I0717 00:30:03.503194   37091 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:30:03.503743   37091 main.go:141] libmachine: Using API Version  1
	I0717 00:30:03.503765   37091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:30:03.504103   37091 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:30:03.504301   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:30:03.541510   37091 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 00:30:03.542844   37091 start.go:297] selected driver: kvm2
	I0717 00:30:03.542856   37091 start.go:901] validating driver "kvm2" against &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:30:03.543000   37091 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:30:03.543351   37091 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:30:03.543431   37091 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:30:03.558318   37091 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:30:03.559016   37091 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:30:03.559046   37091 cni.go:84] Creating CNI manager for ""
	I0717 00:30:03.559054   37091 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 00:30:03.559112   37091 start.go:340] cluster config:
	{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:30:03.559252   37091 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:30:03.561017   37091 out.go:177] * Starting "ha-565881" primary control-plane node in "ha-565881" cluster
	I0717 00:30:03.562183   37091 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:30:03.562210   37091 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:30:03.562219   37091 cache.go:56] Caching tarball of preloaded images
	I0717 00:30:03.562282   37091 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:30:03.562291   37091 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:30:03.562398   37091 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:30:03.562605   37091 start.go:360] acquireMachinesLock for ha-565881: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:30:03.562643   37091 start.go:364] duration metric: took 22.287µs to acquireMachinesLock for "ha-565881"
	I0717 00:30:03.562657   37091 start.go:96] Skipping create...Using existing machine configuration
	I0717 00:30:03.562665   37091 fix.go:54] fixHost starting: 
	I0717 00:30:03.562913   37091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:30:03.562942   37091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:30:03.577346   37091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
	I0717 00:30:03.577771   37091 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:30:03.578283   37091 main.go:141] libmachine: Using API Version  1
	I0717 00:30:03.578307   37091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:30:03.578612   37091 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:30:03.578778   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:30:03.578956   37091 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:30:03.580457   37091 fix.go:112] recreateIfNeeded on ha-565881: state=Running err=<nil>
	W0717 00:30:03.580473   37091 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 00:30:03.583293   37091 out.go:177] * Updating the running kvm2 "ha-565881" VM ...
	I0717 00:30:03.584488   37091 machine.go:94] provisionDockerMachine start ...
	I0717 00:30:03.584508   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:30:03.584718   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.586840   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.587288   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.587320   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.587446   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:03.587598   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.587745   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.587877   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:03.588058   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:03.588246   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:03.588256   37091 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:30:03.705684   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881
	
	I0717 00:30:03.705712   37091 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:30:03.705945   37091 buildroot.go:166] provisioning hostname "ha-565881"
	I0717 00:30:03.705986   37091 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:30:03.706223   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.708858   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.709223   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.709249   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.709419   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:03.709680   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.709842   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.709989   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:03.710164   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:03.710330   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:03.710374   37091 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565881 && echo "ha-565881" | sudo tee /etc/hostname
	I0717 00:30:03.843470   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881
	
	I0717 00:30:03.843498   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.846412   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.846780   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.846804   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.847036   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:03.847216   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.847358   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.847507   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:03.847645   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:03.847802   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:03.847816   37091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565881/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:30:03.965266   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:30:03.965298   37091 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:30:03.965331   37091 buildroot.go:174] setting up certificates
	I0717 00:30:03.965342   37091 provision.go:84] configureAuth start
	I0717 00:30:03.965358   37091 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:30:03.965599   37091 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:30:03.968261   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.968685   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.968720   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.968867   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.971217   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.971529   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.971549   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.971639   37091 provision.go:143] copyHostCerts
	I0717 00:30:03.971663   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:30:03.971726   37091 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 00:30:03.971745   37091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:30:03.971812   37091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:30:03.971911   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:30:03.971939   37091 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 00:30:03.971948   37091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:30:03.972001   37091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:30:03.972058   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:30:03.972075   37091 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 00:30:03.972081   37091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:30:03.972106   37091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:30:03.972159   37091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.ha-565881 san=[127.0.0.1 192.168.39.238 ha-565881 localhost minikube]
	I0717 00:30:04.115427   37091 provision.go:177] copyRemoteCerts
	I0717 00:30:04.115482   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:30:04.115503   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:04.118744   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.119317   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:04.119347   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.119555   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:04.119745   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:04.119928   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:04.120090   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:30:04.208734   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:30:04.208802   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 00:30:04.237408   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:30:04.237489   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:30:04.264010   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:30:04.264070   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:30:04.287879   37091 provision.go:87] duration metric: took 322.51954ms to configureAuth
	I0717 00:30:04.287910   37091 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:30:04.288184   37091 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:30:04.288255   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:04.290649   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.291089   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:04.291116   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.291289   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:04.291470   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:04.291640   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:04.291741   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:04.291873   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:04.292044   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:04.292058   37091 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:31:35.247731   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:31:35.247757   37091 machine.go:97] duration metric: took 1m31.66325606s to provisionDockerMachine
	I0717 00:31:35.247768   37091 start.go:293] postStartSetup for "ha-565881" (driver="kvm2")
	I0717 00:31:35.247799   37091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:31:35.247824   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.248178   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:31:35.248207   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.251173   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.251605   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.251648   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.251775   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.251956   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.252113   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.252239   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:31:35.341073   37091 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:31:35.345318   37091 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:31:35.345349   37091 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 00:31:35.345409   37091 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 00:31:35.345487   37091 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 00:31:35.345496   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /etc/ssl/certs/200682.pem
	I0717 00:31:35.345577   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:31:35.355014   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:31:35.378321   37091 start.go:296] duration metric: took 130.540009ms for postStartSetup
	I0717 00:31:35.378364   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.378645   37091 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 00:31:35.378668   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.381407   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.381759   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.381777   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.381950   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.382135   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.382269   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.382390   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	W0717 00:31:35.467602   37091 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0717 00:31:35.467627   37091 fix.go:56] duration metric: took 1m31.904962355s for fixHost
	I0717 00:31:35.467654   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.470742   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.471061   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.471092   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.471293   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.471500   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.471682   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.471811   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.471998   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:31:35.472184   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:31:35.472199   37091 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:31:35.585646   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721176295.539868987
	
	I0717 00:31:35.585669   37091 fix.go:216] guest clock: 1721176295.539868987
	I0717 00:31:35.585675   37091 fix.go:229] Guest: 2024-07-17 00:31:35.539868987 +0000 UTC Remote: 2024-07-17 00:31:35.467636929 +0000 UTC m=+92.028103333 (delta=72.232058ms)
	I0717 00:31:35.585712   37091 fix.go:200] guest clock delta is within tolerance: 72.232058ms
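The three lines above show the guest VM clock (read over SSH with `date`) being compared to the host clock and the ~72ms skew being accepted. A minimal sketch of that comparison, using an illustrative one-second tolerance (the tolerance minikube actually applies is not taken from this log):

package main

import (
	"fmt"
	"time"
)

// withinTolerance mirrors the skew check logged above: accept the guest/host
// clock delta if its absolute value stays under the given tolerance.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(1721176295, 539868987)               // guest clock from the log
	host := guest.Add(-72232058 * time.Nanosecond)          // host clock 72.232058ms behind
	fmt.Println(withinTolerance(guest, host, time.Second)) // true
}
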
	I0717 00:31:35.585718   37091 start.go:83] releasing machines lock for "ha-565881", held for 1m32.023065415s
	I0717 00:31:35.585737   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.585998   37091 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:31:35.588681   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.589073   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.589105   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.589223   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.589658   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.589816   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.589949   37091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:31:35.590001   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.590075   37091 ssh_runner.go:195] Run: cat /version.json
	I0717 00:31:35.590101   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.592529   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.592811   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.592884   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.592925   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.593058   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.593206   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.593215   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.593229   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.593401   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.593410   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.593555   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.593554   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:31:35.593674   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.593812   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:31:35.674134   37091 ssh_runner.go:195] Run: systemctl --version
	I0717 00:31:35.702524   37091 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:31:35.860996   37091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:31:35.869782   37091 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:31:35.869845   37091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:31:35.878978   37091 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 00:31:35.879007   37091 start.go:495] detecting cgroup driver to use...
	I0717 00:31:35.879098   37091 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:31:35.895504   37091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:31:35.909937   37091 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:31:35.909986   37091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:31:35.923661   37091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:31:35.937352   37091 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:31:36.114537   37091 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:31:36.337616   37091 docker.go:233] disabling docker service ...
	I0717 00:31:36.337696   37091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:31:36.368404   37091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:31:36.382665   37091 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:31:36.542136   37091 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:31:36.694879   37091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:31:36.710588   37091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:31:36.730775   37091 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:31:36.730835   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.742887   37091 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:31:36.742962   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.753720   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.764188   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.774456   37091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:31:36.785055   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.795722   37091 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.806771   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.817066   37091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:31:36.826812   37091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:31:36.836656   37091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:31:36.977073   37091 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:31:46.703564   37091 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.72645615s)
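Taken together, the sed edits above amount to roughly the following CRI-O drop-in (a reconstruction from the logged commands, including assumed section headers, not a verbatim copy of the file), followed by the reload and restart that make them effective:

    # /etc/crio/crio.conf.d/02-crio.conf (reconstructed sketch)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    # apply
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload
    sudo systemctl restart crio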
	I0717 00:31:46.703601   37091 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:31:46.703656   37091 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:31:46.708592   37091 start.go:563] Will wait 60s for crictl version
	I0717 00:31:46.708643   37091 ssh_runner.go:195] Run: which crictl
	I0717 00:31:46.712405   37091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:31:46.748919   37091 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:31:46.748989   37091 ssh_runner.go:195] Run: crio --version
	I0717 00:31:46.776791   37091 ssh_runner.go:195] Run: crio --version
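Equivalent manual checks for the runtime handshake minikube waits for above (a sketch; the socket path matches the one configured in /etc/crictl.yaml earlier):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    crio --version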
	I0717 00:31:46.805919   37091 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:31:46.807247   37091 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:31:46.809680   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:46.810066   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:46.810105   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:46.810335   37091 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:31:46.814801   37091 kubeadm.go:883] updating cluster {Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:31:46.814920   37091 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:31:46.814962   37091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:31:46.864570   37091 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:31:46.864592   37091 crio.go:433] Images already preloaded, skipping extraction
	I0717 00:31:46.864662   37091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:31:46.898334   37091 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:31:46.898361   37091 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:31:46.898374   37091 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.30.2 crio true true} ...
	I0717 00:31:46.898496   37091 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:31:46.898622   37091 ssh_runner.go:195] Run: crio config
	I0717 00:31:46.950419   37091 cni.go:84] Creating CNI manager for ""
	I0717 00:31:46.950449   37091 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 00:31:46.950466   37091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:31:46.950490   37091 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565881 NodeName:ha-565881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:31:46.950650   37091 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565881"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
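One optional sanity check on the generated kubeadm config before it is staged on the node (a sketch, assuming the staged file path and the matching kubeadm binary shown later in this log):

    /var/lib/minikube/binaries/v1.30.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new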
	
	I0717 00:31:46.950675   37091 kube-vip.go:115] generating kube-vip config ...
	I0717 00:31:46.950731   37091 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:31:46.962599   37091 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:31:46.962724   37091 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
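kube-vip's control-plane load balancing (auto-enabled above) relies on the IPVS modules loaded by the modprobe a few lines earlier; a quick way to confirm they are present on the guest (sketch):

    sudo modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
    lsmod | grep -E 'ip_vs|nf_conntrack'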
	I0717 00:31:46.962776   37091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:31:46.972441   37091 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:31:46.972515   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 00:31:46.981722   37091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 00:31:46.998862   37091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:31:47.016994   37091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 00:31:47.040256   37091 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:31:47.056667   37091 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:31:47.061956   37091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:31:47.205261   37091 ssh_runner.go:195] Run: sudo systemctl start kubelet
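After the drop-in and unit file are copied, the daemon-reload and start above can be verified like this (a sketch, not part of the test flow):

    systemctl cat kubelet | head -n 20
    systemctl is-active kubelet && echo "kubelet is running"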
	I0717 00:31:47.220035   37091 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881 for IP: 192.168.39.238
	I0717 00:31:47.220059   37091 certs.go:194] generating shared ca certs ...
	I0717 00:31:47.220074   37091 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:31:47.220232   37091 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 00:31:47.220289   37091 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 00:31:47.220306   37091 certs.go:256] generating profile certs ...
	I0717 00:31:47.220405   37091 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key
	I0717 00:31:47.220439   37091 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d
	I0717 00:31:47.220463   37091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.14 192.168.39.97 192.168.39.254]
	I0717 00:31:47.358180   37091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d ...
	I0717 00:31:47.358210   37091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d: {Name:mkbe0bb2172102aa8c7ea4b23ce0c7fe570174cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:31:47.358402   37091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d ...
	I0717 00:31:47.358423   37091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d: {Name:mkbcb38a702d9304a89a7717b83e8333c6851c66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:31:47.358518   37091 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt
	I0717 00:31:47.358723   37091 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key
	I0717 00:31:47.358880   37091 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key
	I0717 00:31:47.358905   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:31:47.358923   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:31:47.358947   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:31:47.358964   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:31:47.358980   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:31:47.358996   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:31:47.359014   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:31:47.359031   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:31:47.359093   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 00:31:47.359132   37091 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 00:31:47.359146   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:31:47.359174   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:31:47.359203   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:31:47.359237   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 00:31:47.359289   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:31:47.359329   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.359349   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.359367   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem -> /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.359929   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:31:47.386164   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:31:47.410527   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:31:47.434465   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:31:47.456999   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 00:31:47.480811   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 00:31:47.503411   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:31:47.526710   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:31:47.549885   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 00:31:47.573543   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:31:47.598119   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 00:31:47.621760   37091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:31:47.638631   37091 ssh_runner.go:195] Run: openssl version
	I0717 00:31:47.645238   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 00:31:47.655857   37091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.660235   37091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.660292   37091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.665757   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:31:47.674979   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:31:47.685757   37091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.689981   37091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.690028   37091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.695412   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:31:47.704384   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 00:31:47.714711   37091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.718924   37091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.718961   37091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.724398   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
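The pattern in the three blocks above is the standard OpenSSL subject-hash trust wiring: each CA placed under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject hash so system tools pick it up. A condensed sketch for one certificate:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
    openssl verify -CApath /etc/ssl/certs "$cert"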
	I0717 00:31:47.733669   37091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:31:47.737932   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 00:31:47.743392   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 00:31:47.748664   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 00:31:47.753938   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 00:31:47.759225   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 00:31:47.764447   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
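The -checkend 86400 calls above exit non-zero when a certificate expires within the next 24 hours, which is what triggers regeneration. A minimal loop over the same files (sketch):

    for c in apiserver-kubelet-client.crt apiserver-etcd-client.crt front-proxy-client.crt \
             etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt; do
      openssl x509 -noout -in "/var/lib/minikube/certs/$c" -checkend 86400 \
        || echo "$c expires within 24h"
    done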
	I0717 00:31:47.769709   37091 kubeadm.go:392] StartCluster: {Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:31:47.769816   37091 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:31:47.769867   37091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:31:47.806048   37091 cri.go:89] found id: "05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0"
	I0717 00:31:47.806070   37091 cri.go:89] found id: "42119e9324f11f4297cf4f2052d5440773e17236489ca34e1988564acce85cc1"
	I0717 00:31:47.806075   37091 cri.go:89] found id: "8b3db903a1f836c172e85c6e6229a0500c4729281c2733ba22e09d38ec08964b"
	I0717 00:31:47.806079   37091 cri.go:89] found id: "404747229eea4d41bdc771562fc8b910464a0694c31f9ae117eeaec79057382d"
	I0717 00:31:47.806083   37091 cri.go:89] found id: "dcda7fe2ea87d9d0412fd424de512c60b84b972996e99cbd410f5a517bb7bf6a"
	I0717 00:31:47.806087   37091 cri.go:89] found id: "928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519"
	I0717 00:31:47.806091   37091 cri.go:89] found id: "cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c"
	I0717 00:31:47.806095   37091 cri.go:89] found id: "52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146"
	I0717 00:31:47.806099   37091 cri.go:89] found id: "e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f"
	I0717 00:31:47.806106   37091 cri.go:89] found id: "14c44e183ef1f377bf131b0f0b7f0976adbdf72efd90beb01dfa5c8be36324e5"
	I0717 00:31:47.806111   37091 cri.go:89] found id: "1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6"
	I0717 00:31:47.806115   37091 cri.go:89] found id: "ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36"
	I0717 00:31:47.806120   37091 cri.go:89] found id: "2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c"
	I0717 00:31:47.806127   37091 cri.go:89] found id: "c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff"
	I0717 00:31:47.806132   37091 cri.go:89] found id: ""
	I0717 00:31:47.806177   37091 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.745610706Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60a3208c-4ae8-45b2-990f-b8a58b095189 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.747019783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ca05205-e5d4-427a-a3e8-435cf6f5fcf7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.747596935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176877747571385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ca05205-e5d4-427a-a3e8-435cf6f5fcf7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.748218544Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e31f800c-ece6-46be-ab1d-db33af7aea53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.748278578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e31f800c-ece6-46be-ab1d-db33af7aea53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.748886101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16cb08b90a1798a1b0decaa10b138dc553746026bcbcbfceef2f14de0a2d0b67,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176365582149050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afd50ddb3c371671dcdf90746290d6cda31d25cb7e2bf4da6cadf9cd80a3ed53,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721176356567613396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176353576784050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d1de2fa4b27327c0ea0d50f22abea07b3bbeedbeabee25fa6b6925c51cae3c,PodSandboxId:6291ee1cd24eed32e2768981e5933e237015a0217240ae4a2f6f250cda33d6fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721176345821835098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c56dc46091fa9f84d51b7daba191ddb12ee8cbac176d8434cd0a3da5e1a6d53a,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721176326551856835,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131,PodSandboxId:455c3609259116bfb5b20b686f8d2a5d595494f71bd762dbb905c3f00e884b64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176317541402080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96,PodSandboxId:bc60e96519276152aef10c68f24dedda86aa0afe25a4954e53f8ce951fc0e31f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721176312845034877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d
197e5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d80b5690981eea250dda269acc5562685be31b48b3a961a26ef1b506571436b,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176312655469041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kub
ernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e,PodSandboxId:9ba19e8f07eabd1cf7ab258280887d8b7be1fb40897a12464b3fb5972aae684a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721176312685003457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371,PodSandboxId:a5da1d69074397b3b15599402878e7ba3eb9bb2f645757cffee61dc6d331ddfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176312654345932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5,PodSandboxId:b18ab0c603ba0b0cb73f9af63e61df1e460b2e9e31d15d4b454150782a4dd7d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176312539976574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832,PodSandboxId:2f58179b1c60fec5e3492abb2bdf627d4b4f10645f32058fb7cd53cc8772972b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176312496085530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9
a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176312440024140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e1
54d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24716b903522f117c90b08bdcedd0af6f5746145b2bac11a85f50f641ed53e2,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176312435096590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annot
ations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0,PodSandboxId:3acd7d3f5c21f5b11cce8554e291d9295ad5bb823f2fcfe3cc1e870c954ba3b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176296198303734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721175803248543045,Labels:map[string]strin
g{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721175667830910521,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721175655675801389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721175653514932581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa1394
53522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721175633405426109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721175633392545693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e31f800c-ece6-46be-ab1d-db33af7aea53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.803830418Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7ab7613-ae76-4888-b4b5-d41439813d8b name=/runtime.v1.RuntimeService/Version
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.803909995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7ab7613-ae76-4888-b4b5-d41439813d8b name=/runtime.v1.RuntimeService/Version
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.804848445Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78ac8bcd-095a-41a6-b88a-cfe59a51df22 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.805357694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176877805327411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78ac8bcd-095a-41a6-b88a-cfe59a51df22 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.805910162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b5acbbc-cbfc-4069-891a-b7f414994a4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.805966490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b5acbbc-cbfc-4069-891a-b7f414994a4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.806358547Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16cb08b90a1798a1b0decaa10b138dc553746026bcbcbfceef2f14de0a2d0b67,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176365582149050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afd50ddb3c371671dcdf90746290d6cda31d25cb7e2bf4da6cadf9cd80a3ed53,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721176356567613396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176353576784050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d1de2fa4b27327c0ea0d50f22abea07b3bbeedbeabee25fa6b6925c51cae3c,PodSandboxId:6291ee1cd24eed32e2768981e5933e237015a0217240ae4a2f6f250cda33d6fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721176345821835098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c56dc46091fa9f84d51b7daba191ddb12ee8cbac176d8434cd0a3da5e1a6d53a,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721176326551856835,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131,PodSandboxId:455c3609259116bfb5b20b686f8d2a5d595494f71bd762dbb905c3f00e884b64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176317541402080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96,PodSandboxId:bc60e96519276152aef10c68f24dedda86aa0afe25a4954e53f8ce951fc0e31f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721176312845034877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d
197e5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d80b5690981eea250dda269acc5562685be31b48b3a961a26ef1b506571436b,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176312655469041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kub
ernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e,PodSandboxId:9ba19e8f07eabd1cf7ab258280887d8b7be1fb40897a12464b3fb5972aae684a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721176312685003457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371,PodSandboxId:a5da1d69074397b3b15599402878e7ba3eb9bb2f645757cffee61dc6d331ddfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176312654345932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5,PodSandboxId:b18ab0c603ba0b0cb73f9af63e61df1e460b2e9e31d15d4b454150782a4dd7d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176312539976574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832,PodSandboxId:2f58179b1c60fec5e3492abb2bdf627d4b4f10645f32058fb7cd53cc8772972b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176312496085530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9
a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176312440024140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e1
54d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24716b903522f117c90b08bdcedd0af6f5746145b2bac11a85f50f641ed53e2,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176312435096590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annot
ations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0,PodSandboxId:3acd7d3f5c21f5b11cce8554e291d9295ad5bb823f2fcfe3cc1e870c954ba3b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176296198303734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721175803248543045,Labels:map[string]strin
g{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721175667830910521,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721175655675801389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721175653514932581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa1394
53522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721175633405426109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721175633392545693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b5acbbc-cbfc-4069-891a-b7f414994a4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.856185974Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c15d2de7-37c9-43f7-8a03-5fb2e02d5d29 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.857017877Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6291ee1cd24eed32e2768981e5933e237015a0217240ae4a2f6f250cda33d6fe,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-sxdsp,Uid:7a532a93-0ab1-4911-b7f5-9d85eda2be75,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721176345684424854,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:23:21.627315007Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-565881,Uid:a56a7652e75cdb2280ae1925adea5b0d,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1721176326464404358,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{kubernetes.io/config.hash: a56a7652e75cdb2280ae1925adea5b0d,kubernetes.io/config.seen: 2024-07-17T00:31:47.012676746Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a5da1d69074397b3b15599402878e7ba3eb9bb2f645757cffee61dc6d331ddfc,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7wsqq,Uid:4a433e03-decb-405d-82f1-b14a72412c8a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721176312113119447,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-17T00:21:07.213868280Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f58179b1c60fec5e3492abb2bdf627d4b4f10645f32058fb7cd53cc8772972b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-565881,Uid:b826e45ce780868932f8d9a5a17c6b9c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721176312054831247,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b826e45ce780868932f8d9a5a17c6b9c,kubernetes.io/config.seen: 2024-07-17T00:20:39.498541925Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:455c3609259116bfb5b20b686f8d2a5d595494f71bd762dbb905c3f00e884b64,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xftzx,Uid:01fe6b06-0568-4da7-bd0c-1883bc99995c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READ
Y,CreatedAt:1721176312040120150,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:21:07.214009072Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b18ab0c603ba0b0cb73f9af63e61df1e460b2e9e31d15d4b454150782a4dd7d1,Metadata:&PodSandboxMetadata{Name:etcd-ha-565881,Uid:5f82fe075280b90a17d8f04a23fc7629,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721176312039203893,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.238:2379,k
ubernetes.io/config.hash: 5f82fe075280b90a17d8f04a23fc7629,kubernetes.io/config.seen: 2024-07-17T00:20:39.498535926Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-565881,Uid:960ed960c6610568e154d20884b393df,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721176312014591283,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 960ed960c6610568e154d20884b393df,kubernetes.io/config.seen: 2024-07-17T00:20:39.498540922Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&PodSandboxMetadata{Na
me:storage-provisioner,Uid:0aa1050a-43e1-4f7a-a2df-80cafb48e673,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721176312005348078,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNe
twork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T00:21:07.209314242Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ba19e8f07eabd1cf7ab258280887d8b7be1fb40897a12464b3fb5972aae684a,Metadata:&PodSandboxMetadata{Name:kindnet-5lrdt,Uid:bd3c879a-726b-40ed-ba4f-897bf43cda26,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721176311997619123,Labels:map[string]string{app: kindnet,controller-revision-hash: 545f566499,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:20:52.903992589Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc60e96519276152aef10c68f24dedda86aa0afe25a495
4e53f8ce951fc0e31f,Metadata:&PodSandboxMetadata{Name:kube-proxy-7p2jl,Uid:74f5aff6-5e99-4cfe-af04-94198e8d9616,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721176311989194719,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:20:52.887845117Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-565881,Uid:137a148a990fa52e8281e355098ea021,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721176311971550595,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.238:8443,kubernetes.io/config.hash: 137a148a990fa52e8281e355098ea021,kubernetes.io/config.seen: 2024-07-17T00:20:39.498539804Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3acd7d3f5c21f5b11cce8554e291d9295ad5bb823f2fcfe3cc1e870c954ba3b9,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xftzx,Uid:01fe6b06-0568-4da7-bd0c-1883bc99995c,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1721176296018549739,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:21:07.214009072Z,kubernetes.
io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-sxdsp,Uid:7a532a93-0ab1-4911-b7f5-9d85eda2be75,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721175801959385834,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:23:21.627315007Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7wsqq,Uid:4a433e03-decb-405d-82f1-b14a72412c8a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721175667539564286,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:21:07.213868280Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&PodSandboxMetadata{Name:kube-proxy-7p2jl,Uid:74f5aff6-5e99-4cfe-af04-94198e8d9616,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721175653220303170,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:20:52.887845117Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&PodSandboxMetadata{Name:kindnet-5lrdt,Uid:bd3c879a-726b-40ed-ba4f-897bf43cda26,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721175653218014976,Labels:map[string]string{app: kindnet,controller-revision-hash: 545f566499,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T00:20:52.903992589Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&PodSandboxMetadata{Name:etcd-ha-565881,Uid:5f82fe075280b90a17d8f04a23fc7629,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721175633118938420,Labels:map[string]string{component: etcd,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.238:2379,kubernetes.io/config.hash: 5f82fe075280b90a17d8f04a23fc7629,kubernetes.io/config.seen: 2024-07-17T00:20:32.635373262Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-565881,Uid:b826e45ce780868932f8d9a5a17c6b9c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721175633092007394,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b826e45c
e780868932f8d9a5a17c6b9c,kubernetes.io/config.seen: 2024-07-17T00:20:32.635367069Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c15d2de7-37c9-43f7-8a03-5fb2e02d5d29 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.858229886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00b47a3a-9bac-41c6-a96e-adeccef4f669 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.858333936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00b47a3a-9bac-41c6-a96e-adeccef4f669 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.860020497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16cb08b90a1798a1b0decaa10b138dc553746026bcbcbfceef2f14de0a2d0b67,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176365582149050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afd50ddb3c371671dcdf90746290d6cda31d25cb7e2bf4da6cadf9cd80a3ed53,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721176356567613396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176353576784050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d1de2fa4b27327c0ea0d50f22abea07b3bbeedbeabee25fa6b6925c51cae3c,PodSandboxId:6291ee1cd24eed32e2768981e5933e237015a0217240ae4a2f6f250cda33d6fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721176345821835098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c56dc46091fa9f84d51b7daba191ddb12ee8cbac176d8434cd0a3da5e1a6d53a,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721176326551856835,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131,PodSandboxId:455c3609259116bfb5b20b686f8d2a5d595494f71bd762dbb905c3f00e884b64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176317541402080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96,PodSandboxId:bc60e96519276152aef10c68f24dedda86aa0afe25a4954e53f8ce951fc0e31f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721176312845034877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d
197e5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d80b5690981eea250dda269acc5562685be31b48b3a961a26ef1b506571436b,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176312655469041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kub
ernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e,PodSandboxId:9ba19e8f07eabd1cf7ab258280887d8b7be1fb40897a12464b3fb5972aae684a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721176312685003457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371,PodSandboxId:a5da1d69074397b3b15599402878e7ba3eb9bb2f645757cffee61dc6d331ddfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176312654345932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5,PodSandboxId:b18ab0c603ba0b0cb73f9af63e61df1e460b2e9e31d15d4b454150782a4dd7d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176312539976574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832,PodSandboxId:2f58179b1c60fec5e3492abb2bdf627d4b4f10645f32058fb7cd53cc8772972b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176312496085530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9
a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176312440024140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e1
54d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24716b903522f117c90b08bdcedd0af6f5746145b2bac11a85f50f641ed53e2,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176312435096590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annot
ations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0,PodSandboxId:3acd7d3f5c21f5b11cce8554e291d9295ad5bb823f2fcfe3cc1e870c954ba3b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176296198303734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721175803248543045,Labels:map[string]strin
g{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721175667830910521,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721175655675801389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721175653514932581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa1394
53522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721175633405426109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721175633392545693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00b47a3a-9bac-41c6-a96e-adeccef4f669 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.869343105Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=888fb154-5308-4d3c-948d-dd55282e1a64 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.869410318Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=888fb154-5308-4d3c-948d-dd55282e1a64 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.871204205Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a67cb8d-2319-41af-80d6-a3b6ff2bad82 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.871644928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176877871623961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a67cb8d-2319-41af-80d6-a3b6ff2bad82 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.872292796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46a0f85d-8dce-4a7e-b173-1ee32be7d18a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.872347030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46a0f85d-8dce-4a7e-b173-1ee32be7d18a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:17 ha-565881 crio[3887]: time="2024-07-17 00:41:17.872937359Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16cb08b90a1798a1b0decaa10b138dc553746026bcbcbfceef2f14de0a2d0b67,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176365582149050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afd50ddb3c371671dcdf90746290d6cda31d25cb7e2bf4da6cadf9cd80a3ed53,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721176356567613396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176353576784050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d1de2fa4b27327c0ea0d50f22abea07b3bbeedbeabee25fa6b6925c51cae3c,PodSandboxId:6291ee1cd24eed32e2768981e5933e237015a0217240ae4a2f6f250cda33d6fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721176345821835098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c56dc46091fa9f84d51b7daba191ddb12ee8cbac176d8434cd0a3da5e1a6d53a,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721176326551856835,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131,PodSandboxId:455c3609259116bfb5b20b686f8d2a5d595494f71bd762dbb905c3f00e884b64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176317541402080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96,PodSandboxId:bc60e96519276152aef10c68f24dedda86aa0afe25a4954e53f8ce951fc0e31f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721176312845034877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d
197e5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d80b5690981eea250dda269acc5562685be31b48b3a961a26ef1b506571436b,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176312655469041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kub
ernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e,PodSandboxId:9ba19e8f07eabd1cf7ab258280887d8b7be1fb40897a12464b3fb5972aae684a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721176312685003457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371,PodSandboxId:a5da1d69074397b3b15599402878e7ba3eb9bb2f645757cffee61dc6d331ddfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176312654345932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5,PodSandboxId:b18ab0c603ba0b0cb73f9af63e61df1e460b2e9e31d15d4b454150782a4dd7d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176312539976574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832,PodSandboxId:2f58179b1c60fec5e3492abb2bdf627d4b4f10645f32058fb7cd53cc8772972b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176312496085530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9
a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176312440024140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e1
54d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24716b903522f117c90b08bdcedd0af6f5746145b2bac11a85f50f641ed53e2,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176312435096590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annot
ations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0,PodSandboxId:3acd7d3f5c21f5b11cce8554e291d9295ad5bb823f2fcfe3cc1e870c954ba3b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176296198303734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721175803248543045,Labels:map[string]strin
g{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721175667830910521,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721175655675801389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721175653514932581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa1394
53522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721175633405426109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721175633392545693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46a0f85d-8dce-4a7e-b173-1ee32be7d18a name=/runtime.v1.RuntimeService/ListContainers
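The Request/Response pairs above are CRI-O's gRPC interceptor debug logging (otel-collector/interceptors.go) as recorded in the node's crio journal. If the ha-565881 profile is still running, roughly the same information can be pulled straight from the node; a minimal sketch, assuming the primary node shares the profile name:

  # tail the crio unit journal on the primary control-plane node
  minikube ssh -p ha-565881 -- sudo journalctl -u crio --no-pager -n 100

  # list every container the runtime knows about, including exited attempts
  minikube ssh -p ha-565881 -- sudo crictl ps -a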
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	16cb08b90a179       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       4                   002ff42b3204b       storage-provisioner
	afd50ddb3c371       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      8 minutes ago       Running             kube-apiserver            3                   03f0287dade77       kube-apiserver-ha-565881
	1293602792aa7       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      8 minutes ago       Running             kube-controller-manager   2                   be9e8898804ae       kube-controller-manager-ha-565881
	d5d1de2fa4b27       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      8 minutes ago       Running             busybox                   1                   6291ee1cd24ee       busybox-fc5497c4f-sxdsp
	c56dc46091fa9       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      9 minutes ago       Running             kube-vip                  0                   249e7577f5374       kube-vip-ha-565881
	6e4b939607467       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Running             coredns                   2                   455c360925911       coredns-7db6d8ff4d-xftzx
	02a737bcd9b5f       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      9 minutes ago       Running             kube-proxy                1                   bc60e96519276       kube-proxy-7p2jl
	410067f28bfdb       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      9 minutes ago       Running             kindnet-cni               1                   9ba19e8f07eab       kindnet-5lrdt
	7d80b5690981e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       3                   002ff42b3204b       storage-provisioner
	7a7dd9858b20e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Running             coredns                   1                   a5da1d6907439       coredns-7db6d8ff4d-7wsqq
	fb316c8a568ce       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Running             etcd                      1                   b18ab0c603ba0       etcd-ha-565881
	85245e283143e       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      9 minutes ago       Running             kube-scheduler            1                   2f58179b1c60f       kube-scheduler-ha-565881
	583c9df0a3d19       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      9 minutes ago       Exited              kube-controller-manager   1                   be9e8898804ae       kube-controller-manager-ha-565881
	e24716b903522       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      9 minutes ago       Exited              kube-apiserver            2                   03f0287dade77       kube-apiserver-ha-565881
	05847440b65b8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   1                   3acd7d3f5c21f       coredns-7db6d8ff4d-xftzx
	28b495a055524       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   17 minutes ago      Exited              busybox                   0                   e0bd927bf2760       busybox-fc5497c4f-sxdsp
	928ee85bf546b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Exited              coredns                   0                   f688446a5f59c       coredns-7db6d8ff4d-7wsqq
	52b45808cde82       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    20 minutes ago      Exited              kindnet-cni               0                   5c5494014c8b1       kindnet-5lrdt
	e572bb9aec2e8       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      20 minutes ago      Exited              kube-proxy                0                   12f43031f4b04       kube-proxy-7p2jl
	1ec015ce8f841       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      20 minutes ago      Exited              kube-scheduler            0                   a6e2148781333       kube-scheduler-ha-565881
	ab8577693652f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Exited              etcd                      0                   afbb712100717       etcd-ha-565881
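The restart counts in this table (kube-apiserver at attempt 3, kube-controller-manager at 2, storage-provisioner at 4, busybox at 1) indicate the control plane on ha-565881 went down and came back during the run, which matches the exited attempt-0/attempt-1 containers further down the list. A quick client-side cross-check, assuming the profile's kubeconfig context carries the profile name:

  kubectl --context ha-565881 -n kube-system get pods -o wide
  kubectl --context ha-565881 -n default get pods -o wide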
	
	
	==> coredns [05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45091 - 7026 "HINFO IN 1445449914924310106.5846422275679746414. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012221557s
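Every failure in this CoreDNS instance is the same symptom: the pod cannot reach the apiserver through the kubernetes Service ClusterIP at 10.96.0.1:443, which is expected while the apiserver and kube-proxy on this node are restarting. Once the cluster answers again, the Service and its backing EndpointSlice can be checked from the client; a sketch, assuming the kubeconfig context matches the profile name:

  kubectl --context ha-565881 get svc kubernetes -n default
  kubectl --context ha-565881 get endpointslices -n default -l kubernetes.io/service-name=kubernetes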
	
	
	==> coredns [6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131] <==
	[INFO] plugin/kubernetes: Trace[719800634]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:32:04.225) (total time: 13562ms):
	Trace[719800634]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:33770->10.96.0.1:443: read: connection reset by peer 13562ms (00:32:17.787)
	Trace[719800634]: [13.562660788s] [13.562660788s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:33770->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:33240->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:33240->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:33216->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:33216->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[693255936]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:32:01.753) (total time: 10001ms):
	Trace[693255936]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:32:11.754)
	Trace[693255936]: [10.001792401s] [10.001792401s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:55726->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:55726->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519] <==
	[INFO] 10.244.0.4:59609 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005296s
	[INFO] 10.244.0.4:41601 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174604s
	[INFO] 10.244.2.2:54282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144175s
	[INFO] 10.244.2.2:33964 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000291713s
	[INFO] 10.244.2.2:38781 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098409s
	[INFO] 10.244.1.2:58603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132708s
	[INFO] 10.244.2.2:42857 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129277s
	[INFO] 10.244.2.2:45518 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176537s
	[INFO] 10.244.1.2:38437 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111768s
	[INFO] 10.244.1.2:41860 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000210674s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1916&timeout=7m36s&timeoutSeconds=456&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1941&timeout=7m53s&timeoutSeconds=473&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1217777566]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:29:52.269) (total time: 10589ms):
	Trace[1217777566]: ---"Objects listed" error:Unauthorized 10588ms (00:30:02.858)
	Trace[1217777566]: [10.589536721s] [10.589536721s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1846856979]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:29:52.635) (total time: 10227ms):
	Trace[1846856979]: ---"Objects listed" error:Unauthorized 10226ms (00:30:02.861)
	Trace[1846856979]: [10.227274956s] [10.227274956s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
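The GOAWAY and Unauthorized errors in this older CoreDNS container are typical of watch connections that were cut when the apiserver went away and briefly rejected the pod's existing credentials; the container was then terminated (SIGTERM above) and replaced by the later attempts shown earlier. If the replacement pod is still running, the terminated container's output can be re-fetched; a sketch, assuming the pod name from the container table is still current:

  kubectl --context ha-565881 -n kube-system logs coredns-7db6d8ff4d-7wsqq --previous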
	
	
	==> describe nodes <==
	Name:               ha-565881
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_20_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:20:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:41:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:37:56 +0000   Wed, 17 Jul 2024 00:20:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:37:56 +0000   Wed, 17 Jul 2024 00:20:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:37:56 +0000   Wed, 17 Jul 2024 00:20:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:37:56 +0000   Wed, 17 Jul 2024 00:21:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-565881
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6899f2542334306bf4c50f49702dfb5
	  System UUID:                c6899f25-4233-4306-bf4c-50f49702dfb5
	  Boot ID:                    f5b041e8-ae19-4f7a-ac0d-a039fbca796b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sxdsp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7db6d8ff4d-7wsqq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-7db6d8ff4d-xftzx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-ha-565881                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-5lrdt                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-565881             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-565881    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-7p2jl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-565881             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-565881                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 20m                  kube-proxy       
	  Normal   Starting                 8m40s                kube-proxy       
	  Normal   NodeHasSufficientPID     20m                  kubelet          Node ha-565881 status is now: NodeHasSufficientPID
	  Normal   Starting                 20m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  20m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  20m                  kubelet          Node ha-565881 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m                  kubelet          Node ha-565881 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           20m                  node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	  Normal   NodeReady                20m                  kubelet          Node ha-565881 status is now: NodeReady
	  Normal   RegisteredNode           19m                  node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	  Normal   RegisteredNode           18m                  node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	  Warning  ContainerGCFailed        9m39s (x2 over 10m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           8m33s                node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	  Normal   RegisteredNode           8m27s                node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	
	
	Name:               ha-565881-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_21_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:21:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:41:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:38:26 +0000   Wed, 17 Jul 2024 00:32:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:38:26 +0000   Wed, 17 Jul 2024 00:32:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:38:26 +0000   Wed, 17 Jul 2024 00:32:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:38:26 +0000   Wed, 17 Jul 2024 00:32:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-565881-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 002cfcb8afdc450f9dbf024dbe1dd968
	  System UUID:                002cfcb8-afdc-450f-9dbf-024dbe1dd968
	  Boot ID:                    09d30567-9ab8-4527-b894-0f75dcd209ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rdpwj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-565881-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-k882n                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-565881-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-565881-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-2f9rj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-565881-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-565881-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 19m                  kube-proxy       
	  Normal  Starting                 8m22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)    kubelet          Node ha-565881-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)    kubelet          Node ha-565881-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)    kubelet          Node ha-565881-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                  node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  RegisteredNode           19m                  node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  RegisteredNode           18m                  node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  NodeNotReady             16m                  node-controller  Node ha-565881-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  9m7s (x8 over 9m7s)  kubelet          Node ha-565881-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    9m7s (x8 over 9m7s)  kubelet          Node ha-565881-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m7s (x7 over 9m7s)  kubelet          Node ha-565881-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m33s                node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  RegisteredNode           8m27s                node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	
	
	Name:               ha-565881-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_22_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:22:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:41:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:38:53 +0000   Wed, 17 Jul 2024 00:22:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:38:53 +0000   Wed, 17 Jul 2024 00:22:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:38:53 +0000   Wed, 17 Jul 2024 00:22:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:38:53 +0000   Wed, 17 Jul 2024 00:23:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-565881-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d0000c1f74247c095cd9247f3f0c350
	  System UUID:                3d0000c1-f742-47c0-95cd-9247f3f0c350
	  Boot ID:                    413dfe81-e41d-443f-aa23-09a71acaf475
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lmz4q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-565881-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-ctstx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-565881-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-565881-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-k5x6x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-565881-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-565881-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 7m56s              kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node ha-565881-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node ha-565881-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node ha-565881-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                node-controller  Node ha-565881-m03 event: Registered Node ha-565881-m03 in Controller
	  Normal   RegisteredNode           18m                node-controller  Node ha-565881-m03 event: Registered Node ha-565881-m03 in Controller
	  Normal   RegisteredNode           18m                node-controller  Node ha-565881-m03 event: Registered Node ha-565881-m03 in Controller
	  Normal   RegisteredNode           8m33s              node-controller  Node ha-565881-m03 event: Registered Node ha-565881-m03 in Controller
	  Normal   RegisteredNode           8m27s              node-controller  Node ha-565881-m03 event: Registered Node ha-565881-m03 in Controller
	  Normal   NodeAllocatableEnforced  8m3s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m3s               kubelet          Node ha-565881-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m3s               kubelet          Node ha-565881-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m3s               kubelet          Node ha-565881-m03 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m3s               kubelet          Starting kubelet.
	  Warning  Rebooted                 8m2s               kubelet          Node ha-565881-m03 has been rebooted, boot id: 413dfe81-e41d-443f-aa23-09a71acaf475
	
	
	Name:               ha-565881-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_23_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:23:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:27:54 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:33:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:33:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:33:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:33:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-565881-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 008ae63d929d475b8bab60c832202ce9
	  System UUID:                008ae63d-929d-475b-8bab-60c832202ce9
	  Boot ID:                    3540bc22-336a-438e-8b63-852810ced32c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-xz7nj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-proxy-p5xml    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  RegisteredNode           17m                node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  NodeHasSufficientMemory  17m (x2 over 17m)  kubelet          Node ha-565881-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x2 over 17m)  kubelet          Node ha-565881-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x2 over 17m)  kubelet          Node ha-565881-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  NodeReady                17m                kubelet          Node ha-565881-m04 status is now: NodeReady
	  Normal  RegisteredNode           8m33s              node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  RegisteredNode           8m27s              node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  NodeNotReady             7m53s              node-controller  Node ha-565881-m04 status is now: NodeNotReady
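
	The ha-565881-m04 entry above is the only node reporting Unknown conditions: its kubelet lease was last renewed at 00:27:54, the node controller has since marked it NotReady, and the unreachable NoSchedule/NoExecute taints have been applied. As an illustrative check only (the kubectl context name ha-565881 is assumed to match the minikube profile, not taken from this output), the same state can be read directly from the API:

	  kubectl --context ha-565881 get node ha-565881-m04 \
	    -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")]}{"\n"}'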
	
	
	==> dmesg <==
	[  +8.825427] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057593] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065677] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.195559] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.109938] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.261884] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.129275] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.597572] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.062309] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.075955] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.082514] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.034910] kauditd_printk_skb: 21 callbacks suppressed
	[Jul17 00:21] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.822749] kauditd_printk_skb: 24 callbacks suppressed
	[Jul17 00:28] kauditd_printk_skb: 1 callbacks suppressed
	[Jul17 00:31] systemd-fstab-generator[3692]: Ignoring "noauto" option for root device
	[  +0.216028] systemd-fstab-generator[3758]: Ignoring "noauto" option for root device
	[  +0.227364] systemd-fstab-generator[3827]: Ignoring "noauto" option for root device
	[  +0.155492] systemd-fstab-generator[3839]: Ignoring "noauto" option for root device
	[  +0.283345] systemd-fstab-generator[3868]: Ignoring "noauto" option for root device
	[ +10.230037] systemd-fstab-generator[3997]: Ignoring "noauto" option for root device
	[  +0.086916] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.012354] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.429241] kauditd_printk_skb: 73 callbacks suppressed
	[Jul17 00:32] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36] <==
	2024/07/17 00:30:04 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-17T00:30:04.424266Z","caller":"traceutil/trace.go:171","msg":"trace[2084447961] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; }","duration":"568.016613ms","start":"2024-07-17T00:30:03.856238Z","end":"2024-07-17T00:30:04.424254Z","steps":["trace[2084447961] 'agreement among raft nodes before linearized reading'  (duration: 553.842388ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:30:04.429786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:30:03.856232Z","time spent":"573.541514ms","remote":"127.0.0.1:35722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:500 "}
	2024/07/17 00:30:04 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T00:30:04.576153Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":10056697113903918594,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-17T00:30:04.687881Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.238:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:30:04.687938Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.238:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T00:30:04.688033Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fff3906243738b90","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-17T00:30:04.68823Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.688475Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.688593Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.688783Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.68891Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.688995Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.689036Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.689044Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689055Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.68908Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689157Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fff3906243738b90","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689409Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689445Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689456Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.692456Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-07-17T00:30:04.692685Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-07-17T00:30:04.692785Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-565881","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.238:2380"],"advertise-client-urls":["https://192.168.39.238:2379"]}
	
	
	==> etcd [fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5] <==
	{"level":"warn","ts":"2024-07-17T00:40:53.415651Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:40:54.132232Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:40:54.132298Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:40:58.134285Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:40:58.134357Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:40:58.416624Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:40:58.41663Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:02.135942Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:02.136011Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:03.417604Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:03.417818Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:06.137483Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:06.137559Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:08.418261Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:08.418401Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:10.139391Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:10.139466Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:13.419169Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:13.419254Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:14.141412Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:14.141591Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:18.142923Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:18.142977Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:18.421817Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:18.421909Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	
	
	==> kernel <==
	 00:41:18 up 21 min,  0 users,  load average: 0.58, 0.45, 0.38
	Linux ha-565881 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e] <==
	I0717 00:40:43.902182       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:40:53.894902       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:40:53.895027       1 main.go:303] handling current node
	I0717 00:40:53.895056       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:40:53.895089       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:40:53.895254       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:40:53.895343       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:40:53.895451       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:40:53.895472       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:41:03.895143       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:41:03.895335       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:41:03.895622       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:41:03.895675       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:41:03.895874       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:41:03.895903       1 main.go:303] handling current node
	I0717 00:41:03.895941       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:41:03.895958       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:41:13.898969       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:41:13.899058       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:41:13.899243       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:41:13.899269       1 main.go:303] handling current node
	I0717 00:41:13.899294       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:41:13.899299       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:41:13.899348       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:41:13.899367       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146] <==
	I0717 00:29:36.730573       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:29:36.730625       1 main.go:303] handling current node
	I0717 00:29:36.730649       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:29:36.730656       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:29:36.730916       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:29:36.730951       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:29:36.731053       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:29:36.731081       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:29:46.727855       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:29:46.727918       1 main.go:303] handling current node
	I0717 00:29:46.727931       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:29:46.727937       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:29:46.728154       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:29:46.728180       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:29:46.728251       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:29:46.728270       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:29:56.722847       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:29:56.722880       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:29:56.723110       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:29:56.723136       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:29:56.723203       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:29:56.723223       1 main.go:303] handling current node
	I0717 00:29:56.723239       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:29:56.723243       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	E0717 00:30:02.872654       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> kube-apiserver [afd50ddb3c371671dcdf90746290d6cda31d25cb7e2bf4da6cadf9cd80a3ed53] <==
	I0717 00:32:38.483238       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0717 00:32:38.483272       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0717 00:32:38.483288       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0717 00:32:38.571245       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 00:32:38.575686       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 00:32:38.576230       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 00:32:38.577277       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 00:32:38.577354       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 00:32:38.578167       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 00:32:38.578274       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 00:32:38.578363       1 policy_source.go:224] refreshing policies
	I0717 00:32:38.580666       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 00:32:38.584193       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 00:32:38.584289       1 aggregator.go:165] initial CRD sync complete...
	I0717 00:32:38.584324       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 00:32:38.584329       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 00:32:38.584335       1 cache.go:39] Caches are synced for autoregister controller
	I0717 00:32:38.585962       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 00:32:38.664363       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0717 00:32:38.713497       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.14]
	I0717 00:32:38.715905       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:32:38.726599       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0717 00:32:38.738160       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0717 00:32:39.488518       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 00:32:39.957907       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.14 192.168.39.238]
	
	
	==> kube-apiserver [e24716b903522f117c90b08bdcedd0af6f5746145b2bac11a85f50f641ed53e2] <==
	I0717 00:31:53.403645       1 options.go:221] external host was not specified, using 192.168.39.238
	I0717 00:31:53.408787       1 server.go:148] Version: v1.30.2
	I0717 00:31:53.408890       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:31:54.031222       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 00:31:54.036627       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 00:31:54.039836       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 00:31:54.039910       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 00:31:54.040103       1 instance.go:299] Using reconciler: lease
	W0717 00:32:14.030009       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0717 00:32:14.030012       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0717 00:32:14.041256       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
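
	This apiserver instance exits fatally because it cannot complete the etcd handshake on 127.0.0.1:2379 within roughly 20 seconds (00:31:54 to 00:32:14); the replacement instance shown above (afd50ddb3c37...) comes up once etcd is reachable and has its caches synced by 00:32:38. To see that restart sequence directly on the node, a command along these lines should work (illustrative only; it assumes the ha-565881 profile and the crictl setup used in these VMs):

	  out/minikube-linux-amd64 -p ha-565881 ssh "sudo crictl ps -a --name kube-apiserver"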
	
	
	==> kube-controller-manager [1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c] <==
	I0717 00:32:51.711449       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0717 00:32:51.726926       1 shared_informer.go:320] Caches are synced for endpoint
	I0717 00:32:51.743929       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:32:51.749150       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0717 00:32:51.749408       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="97.375µs"
	I0717 00:32:51.749453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128.869µs"
	I0717 00:32:51.777015       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 00:32:51.777557       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0717 00:32:51.788853       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:32:51.799425       1 shared_informer.go:320] Caches are synced for deployment
	I0717 00:32:51.807308       1 shared_informer.go:320] Caches are synced for disruption
	I0717 00:32:52.194125       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:32:52.194165       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 00:32:52.219876       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:32:57.613255       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-r95fq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-r95fq\": the object has been modified; please apply your changes to the latest version and try again"
	I0717 00:32:57.614215       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"d3ba4588-dd39-49d8-9dff-c1b4d5aa821c", APIVersion:"v1", ResourceVersion:"246", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-r95fq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-r95fq": the object has been modified; please apply your changes to the latest version and try again
	I0717 00:32:57.639058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="101.183292ms"
	I0717 00:32:57.683113       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.92544ms"
	I0717 00:32:57.683299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="75.568µs"
	I0717 00:32:59.328956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.966331ms"
	I0717 00:32:59.329054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.845µs"
	I0717 00:33:16.923162       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.133862ms"
	I0717 00:33:16.923535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.167µs"
	I0717 00:33:25.046500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.481722ms"
	I0717 00:33:25.047235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.736µs"
	
	
	==> kube-controller-manager [583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b] <==
	I0717 00:31:53.451857       1 serving.go:380] Generated self-signed cert in-memory
	I0717 00:31:54.330220       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 00:31:54.330410       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:31:54.332377       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 00:31:54.332563       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 00:31:54.333171       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 00:31:54.333111       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0717 00:32:15.048922       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.238:8443/healthz\": dial tcp 192.168.39.238:8443: connect: connection refused"
	
	
	==> kube-proxy [02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96] <==
	E0717 00:32:18.747232       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-565881\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 00:32:37.188740       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-565881\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0717 00:32:37.188819       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0717 00:32:37.274295       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:32:37.274381       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:32:37.274407       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:32:37.281013       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:32:37.288028       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:32:37.288061       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:32:37.302361       1 config.go:192] "Starting service config controller"
	I0717 00:32:37.302412       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:32:37.302433       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:32:37.302437       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:32:37.309370       1 config.go:319] "Starting node config controller"
	I0717 00:32:37.309425       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0717 00:32:40.251145       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0717 00:32:40.251120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-565881&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:32:40.251290       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-565881&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:32:40.251308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:32:40.251369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:32:40.251382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:32:40.251422       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0717 00:32:41.302914       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:32:41.509914       1 shared_informer.go:320] Caches are synced for node config
	I0717 00:32:41.702629       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f] <==
	E0717 00:28:59.070494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:02.139362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:02.139481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:02.139619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:02.139672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:02.139814       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:02.139854       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:08.284785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:08.285266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:08.285574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:08.285674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:08.285577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:08.285769       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:17.500214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:17.500543       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:20.571289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:20.571426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:20.571642       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:20.572230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:39.005784       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:39.006082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:42.075640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:42.075804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:42.075993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:42.076034       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6] <==
	W0717 00:29:59.991637       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:29:59.991777       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:30:00.157330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:30:00.157432       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:30:00.277280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:30:00.277385       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:30:00.411889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:30:00.411986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:30:00.443474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:30:00.443527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:30:00.612176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 00:30:00.612230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:30:00.899110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:30:00.899224       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:30:01.520074       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:30:01.520137       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:30:01.959591       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:30:01.959649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:30:02.057654       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:30:02.057750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:30:02.126931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:30:02.126999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:30:02.459997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:30:02.460096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:30:04.395180       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832] <==
	W0717 00:32:31.202397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.238:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:31.202482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.238:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:31.486990       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.238:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:31.487057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.238:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:31.868585       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.238:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:31.868685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.238:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:32.160842       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.238:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:32.160995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.238:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:33.000923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.238:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:33.000999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.238:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:33.390403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.238:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:33.390520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.238:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:33.850159       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.238:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:33.850264       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.238:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:34.946080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:34.946141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:35.143908       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.238:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:35.143978       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.238:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:35.185931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.238:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:35.186052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.238:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:35.518309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.238:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:35.518457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.238:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:35.579179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:35.579240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	I0717 00:32:50.859089       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:36:39 ha-565881 kubelet[1370]: E0717 00:36:39.573603    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:36:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:36:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:36:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:36:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:37:39 ha-565881 kubelet[1370]: E0717 00:37:39.573148    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:37:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:37:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:37:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:37:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:38:39 ha-565881 kubelet[1370]: E0717 00:38:39.579086    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:38:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:38:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:38:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:38:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:39:39 ha-565881 kubelet[1370]: E0717 00:39:39.572324    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:39:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:39:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:39:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:39:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:40:39 ha-565881 kubelet[1370]: E0717 00:40:39.577783    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:40:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:40:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:40:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:40:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 00:41:17.421044   39776 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19265-12897/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
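The "bufio.Scanner: token too long" failure in the stderr block above is Go's bufio.Scanner hitting its default 64 KiB per-line limit while reading lastStart.txt; that limit is a property of the standard library, not of the cluster under test. A minimal sketch of reading such a file with an enlarged scanner buffer (the file path and buffer sizes here are illustrative only, not minikube's actual code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path, not the real minikube log location
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// bufio.Scanner rejects lines longer than 64 KiB by default and returns
		// bufio.ErrTooLong ("token too long"); raising the cap avoids that.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process one (possibly very long) log line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}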
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565881 -n ha-565881
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565881 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: etcd-ha-565881-m03 kube-controller-manager-ha-565881-m03 kube-scheduler-ha-565881-m03 kube-vip-ha-565881-m03
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-565881 describe pod etcd-ha-565881-m03 kube-controller-manager-ha-565881-m03 kube-scheduler-ha-565881-m03 kube-vip-ha-565881-m03
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ha-565881 describe pod etcd-ha-565881-m03 kube-controller-manager-ha-565881-m03 kube-scheduler-ha-565881-m03 kube-vip-ha-565881-m03: exit status 1 (64.853437ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "etcd-ha-565881-m03" not found
	Error from server (NotFound): pods "kube-controller-manager-ha-565881-m03" not found
	Error from server (NotFound): pods "kube-scheduler-ha-565881-m03" not found
	Error from server (NotFound): pods "kube-vip-ha-565881-m03" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ha-565881 describe pod etcd-ha-565881-m03 kube-controller-manager-ha-565881-m03 kube-scheduler-ha-565881-m03 kube-vip-ha-565881-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (798.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (13.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-565881 node delete m03 -v=7 --alsologtostderr: (10.686820055s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr: exit status 7 (476.506809ms)

                                                
                                                
-- stdout --
	ha-565881
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565881-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:41:30.312470   40049 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:41:30.312591   40049 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:41:30.312602   40049 out.go:304] Setting ErrFile to fd 2...
	I0717 00:41:30.312609   40049 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:41:30.312836   40049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:41:30.313016   40049 out.go:298] Setting JSON to false
	I0717 00:41:30.313045   40049 mustload.go:65] Loading cluster: ha-565881
	I0717 00:41:30.313177   40049 notify.go:220] Checking for updates...
	I0717 00:41:30.313560   40049 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:41:30.313577   40049 status.go:255] checking status of ha-565881 ...
	I0717 00:41:30.314016   40049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:30.314069   40049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:30.332525   40049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I0717 00:41:30.332992   40049 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:30.333541   40049 main.go:141] libmachine: Using API Version  1
	I0717 00:41:30.333562   40049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:30.333923   40049 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:30.334122   40049 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:41:30.335979   40049 status.go:330] ha-565881 host status = "Running" (err=<nil>)
	I0717 00:41:30.336006   40049 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:41:30.336333   40049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:30.336373   40049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:30.350449   40049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37751
	I0717 00:41:30.350770   40049 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:30.351227   40049 main.go:141] libmachine: Using API Version  1
	I0717 00:41:30.351245   40049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:30.351525   40049 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:30.351702   40049 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:41:30.354361   40049 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:41:30.354744   40049 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:41:30.354777   40049 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:41:30.354851   40049 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:41:30.355141   40049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:30.355175   40049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:30.369673   40049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38269
	I0717 00:41:30.370032   40049 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:30.370452   40049 main.go:141] libmachine: Using API Version  1
	I0717 00:41:30.370472   40049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:30.370764   40049 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:30.370955   40049 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:41:30.371139   40049 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:41:30.371163   40049 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:41:30.373916   40049 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:41:30.374335   40049 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:41:30.374362   40049 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:41:30.374466   40049 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:41:30.374592   40049 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:41:30.374734   40049 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:41:30.374859   40049 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:41:30.461240   40049 ssh_runner.go:195] Run: systemctl --version
	I0717 00:41:30.468824   40049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:41:30.485669   40049 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:41:30.485694   40049 api_server.go:166] Checking apiserver status ...
	I0717 00:41:30.485720   40049 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:41:30.499850   40049 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5154/cgroup
	W0717 00:41:30.510873   40049 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5154/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:41:30.510935   40049 ssh_runner.go:195] Run: ls
	I0717 00:41:30.515562   40049 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:41:30.521796   40049 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:41:30.521821   40049 status.go:422] ha-565881 apiserver status = Running (err=<nil>)
	I0717 00:41:30.521859   40049 status.go:257] ha-565881 status: &{Name:ha-565881 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:41:30.521886   40049 status.go:255] checking status of ha-565881-m02 ...
	I0717 00:41:30.522166   40049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:30.522205   40049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:30.537889   40049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42579
	I0717 00:41:30.538326   40049 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:30.538821   40049 main.go:141] libmachine: Using API Version  1
	I0717 00:41:30.538843   40049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:30.539162   40049 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:30.539380   40049 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:41:30.541131   40049 status.go:330] ha-565881-m02 host status = "Running" (err=<nil>)
	I0717 00:41:30.541159   40049 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:41:30.541424   40049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:30.541454   40049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:30.556506   40049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42207
	I0717 00:41:30.556856   40049 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:30.557318   40049 main.go:141] libmachine: Using API Version  1
	I0717 00:41:30.557344   40049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:30.557671   40049 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:30.557892   40049 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:41:30.560775   40049 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:41:30.561204   40049 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:31:58 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:41:30.561228   40049 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:41:30.561357   40049 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:41:30.561668   40049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:30.561706   40049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:30.577307   40049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I0717 00:41:30.577766   40049 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:30.578223   40049 main.go:141] libmachine: Using API Version  1
	I0717 00:41:30.578243   40049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:30.578512   40049 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:30.578643   40049 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:41:30.578836   40049 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:41:30.578854   40049 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:41:30.581371   40049 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:41:30.581753   40049 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:31:58 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:41:30.581775   40049 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:41:30.581887   40049 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:41:30.582049   40049 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:41:30.582206   40049 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:41:30.582322   40049 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	I0717 00:41:30.666259   40049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:41:30.689496   40049 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:41:30.689527   40049 api_server.go:166] Checking apiserver status ...
	I0717 00:41:30.689561   40049 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:41:30.707985   40049 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1658/cgroup
	W0717 00:41:30.718897   40049 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1658/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:41:30.718953   40049 ssh_runner.go:195] Run: ls
	I0717 00:41:30.724141   40049 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:41:30.728317   40049 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:41:30.728336   40049 status.go:422] ha-565881-m02 apiserver status = Running (err=<nil>)
	I0717 00:41:30.728346   40049 status.go:257] ha-565881-m02 status: &{Name:ha-565881-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:41:30.728373   40049 status.go:255] checking status of ha-565881-m04 ...
	I0717 00:41:30.728763   40049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:30.728802   40049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:30.744339   40049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33717
	I0717 00:41:30.744762   40049 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:30.745215   40049 main.go:141] libmachine: Using API Version  1
	I0717 00:41:30.745240   40049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:30.745529   40049 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:30.745712   40049 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:41:30.747505   40049 status.go:330] ha-565881-m04 host status = "Stopped" (err=<nil>)
	I0717 00:41:30.747518   40049 status.go:343] host is not running, skipping remaining checks
	I0717 00:41:30.747523   40049 status.go:257] ha-565881-m04 status: &{Name:ha-565881-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565881 -n ha-565881
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565881 logs -n 25: (1.658943292s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m02 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m03_ha-565881-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04:/home/docker/cp-test_ha-565881-m03_ha-565881-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m04 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m03_ha-565881-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp testdata/cp-test.txt                                               | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile507733948/001/cp-test_ha-565881-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881:/home/docker/cp-test_ha-565881-m04_ha-565881.txt                      |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881 sudo cat                                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881.txt                                |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m02:/home/docker/cp-test_ha-565881-m04_ha-565881-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m02 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03:/home/docker/cp-test_ha-565881-m04_ha-565881-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m03 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-565881 node stop m02 -v=7                                                    | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-565881 node start m02 -v=7                                                   | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:27 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-565881 -v=7                                                          | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-565881 -v=7                                                               | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-565881 --wait=true -v=7                                                   | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:30 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-565881                                                               | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC |                     |
	| node    | ha-565881 node delete m03 -v=7                                                  | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:30:03
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:30:03.472958   37091 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:30:03.473178   37091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:30:03.473186   37091 out.go:304] Setting ErrFile to fd 2...
	I0717 00:30:03.473190   37091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:30:03.473344   37091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:30:03.473853   37091 out.go:298] Setting JSON to false
	I0717 00:30:03.474716   37091 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4352,"bootTime":1721171851,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:30:03.474771   37091 start.go:139] virtualization: kvm guest
	I0717 00:30:03.477060   37091 out.go:177] * [ha-565881] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:30:03.478329   37091 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:30:03.478403   37091 notify.go:220] Checking for updates...
	I0717 00:30:03.480995   37091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:30:03.482344   37091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:30:03.483547   37091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:30:03.484814   37091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:30:03.485998   37091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:30:03.487571   37091 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:30:03.487666   37091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:30:03.488110   37091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:30:03.488183   37091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:30:03.502769   37091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46857
	I0717 00:30:03.503194   37091 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:30:03.503743   37091 main.go:141] libmachine: Using API Version  1
	I0717 00:30:03.503765   37091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:30:03.504103   37091 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:30:03.504301   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:30:03.541510   37091 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 00:30:03.542844   37091 start.go:297] selected driver: kvm2
	I0717 00:30:03.542856   37091 start.go:901] validating driver "kvm2" against &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:30:03.543000   37091 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:30:03.543351   37091 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:30:03.543431   37091 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:30:03.558318   37091 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:30:03.559016   37091 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:30:03.559046   37091 cni.go:84] Creating CNI manager for ""
	I0717 00:30:03.559054   37091 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 00:30:03.559112   37091 start.go:340] cluster config:
	{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:30:03.559252   37091 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:30:03.561017   37091 out.go:177] * Starting "ha-565881" primary control-plane node in "ha-565881" cluster
	I0717 00:30:03.562183   37091 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:30:03.562210   37091 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:30:03.562219   37091 cache.go:56] Caching tarball of preloaded images
	I0717 00:30:03.562282   37091 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:30:03.562291   37091 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:30:03.562398   37091 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:30:03.562605   37091 start.go:360] acquireMachinesLock for ha-565881: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:30:03.562643   37091 start.go:364] duration metric: took 22.287µs to acquireMachinesLock for "ha-565881"
	I0717 00:30:03.562657   37091 start.go:96] Skipping create...Using existing machine configuration
	I0717 00:30:03.562665   37091 fix.go:54] fixHost starting: 
	I0717 00:30:03.562913   37091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:30:03.562942   37091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:30:03.577346   37091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
	I0717 00:30:03.577771   37091 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:30:03.578283   37091 main.go:141] libmachine: Using API Version  1
	I0717 00:30:03.578307   37091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:30:03.578612   37091 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:30:03.578778   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:30:03.578956   37091 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:30:03.580457   37091 fix.go:112] recreateIfNeeded on ha-565881: state=Running err=<nil>
	W0717 00:30:03.580473   37091 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 00:30:03.583293   37091 out.go:177] * Updating the running kvm2 "ha-565881" VM ...
	I0717 00:30:03.584488   37091 machine.go:94] provisionDockerMachine start ...
	I0717 00:30:03.584508   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:30:03.584718   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.586840   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.587288   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.587320   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.587446   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:03.587598   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.587745   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.587877   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:03.588058   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:03.588246   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:03.588256   37091 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:30:03.705684   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881
	
	I0717 00:30:03.705712   37091 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:30:03.705945   37091 buildroot.go:166] provisioning hostname "ha-565881"
	I0717 00:30:03.705986   37091 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:30:03.706223   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.708858   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.709223   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.709249   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.709419   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:03.709680   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.709842   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.709989   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:03.710164   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:03.710330   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:03.710374   37091 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565881 && echo "ha-565881" | sudo tee /etc/hostname
	I0717 00:30:03.843470   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881
	
	I0717 00:30:03.843498   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.846412   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.846780   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.846804   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.847036   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:03.847216   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.847358   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.847507   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:03.847645   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:03.847802   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:03.847816   37091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565881/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:30:03.965266   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:30:03.965298   37091 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:30:03.965331   37091 buildroot.go:174] setting up certificates
	I0717 00:30:03.965342   37091 provision.go:84] configureAuth start
	I0717 00:30:03.965358   37091 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:30:03.965599   37091 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:30:03.968261   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.968685   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.968720   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.968867   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.971217   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.971529   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.971549   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.971639   37091 provision.go:143] copyHostCerts
	I0717 00:30:03.971663   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:30:03.971726   37091 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 00:30:03.971745   37091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:30:03.971812   37091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:30:03.971911   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:30:03.971939   37091 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 00:30:03.971948   37091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:30:03.972001   37091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:30:03.972058   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:30:03.972075   37091 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 00:30:03.972081   37091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:30:03.972106   37091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:30:03.972159   37091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.ha-565881 san=[127.0.0.1 192.168.39.238 ha-565881 localhost minikube]
	I0717 00:30:04.115427   37091 provision.go:177] copyRemoteCerts
	I0717 00:30:04.115482   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:30:04.115503   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:04.118744   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.119317   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:04.119347   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.119555   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:04.119745   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:04.119928   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:04.120090   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:30:04.208734   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:30:04.208802   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 00:30:04.237408   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:30:04.237489   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:30:04.264010   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:30:04.264070   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:30:04.287879   37091 provision.go:87] duration metric: took 322.51954ms to configureAuth
	I0717 00:30:04.287910   37091 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:30:04.288184   37091 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:30:04.288255   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:04.290649   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.291089   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:04.291116   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.291289   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:04.291470   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:04.291640   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:04.291741   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:04.291873   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:04.292044   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:04.292058   37091 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:31:35.247731   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:31:35.247757   37091 machine.go:97] duration metric: took 1m31.66325606s to provisionDockerMachine
	I0717 00:31:35.247768   37091 start.go:293] postStartSetup for "ha-565881" (driver="kvm2")
	I0717 00:31:35.247799   37091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:31:35.247824   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.248178   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:31:35.248207   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.251173   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.251605   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.251648   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.251775   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.251956   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.252113   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.252239   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:31:35.341073   37091 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:31:35.345318   37091 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:31:35.345349   37091 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 00:31:35.345409   37091 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 00:31:35.345487   37091 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 00:31:35.345496   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /etc/ssl/certs/200682.pem
	I0717 00:31:35.345577   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:31:35.355014   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:31:35.378321   37091 start.go:296] duration metric: took 130.540009ms for postStartSetup
	I0717 00:31:35.378364   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.378645   37091 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 00:31:35.378668   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.381407   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.381759   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.381777   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.381950   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.382135   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.382269   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.382390   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	W0717 00:31:35.467602   37091 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0717 00:31:35.467627   37091 fix.go:56] duration metric: took 1m31.904962355s for fixHost
	I0717 00:31:35.467654   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.470742   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.471061   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.471092   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.471293   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.471500   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.471682   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.471811   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.471998   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:31:35.472184   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:31:35.472199   37091 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 00:31:35.585646   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721176295.539868987
	
	I0717 00:31:35.585669   37091 fix.go:216] guest clock: 1721176295.539868987
	I0717 00:31:35.585675   37091 fix.go:229] Guest: 2024-07-17 00:31:35.539868987 +0000 UTC Remote: 2024-07-17 00:31:35.467636929 +0000 UTC m=+92.028103333 (delta=72.232058ms)
	I0717 00:31:35.585712   37091 fix.go:200] guest clock delta is within tolerance: 72.232058ms
	I0717 00:31:35.585718   37091 start.go:83] releasing machines lock for "ha-565881", held for 1m32.023065415s
	I0717 00:31:35.585737   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.585998   37091 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:31:35.588681   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.589073   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.589105   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.589223   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.589658   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.589816   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.589949   37091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:31:35.590001   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.590075   37091 ssh_runner.go:195] Run: cat /version.json
	I0717 00:31:35.590101   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.592529   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.592811   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.592884   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.592925   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.593058   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.593206   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.593215   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.593229   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.593401   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.593410   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.593555   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.593554   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:31:35.593674   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.593812   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:31:35.674134   37091 ssh_runner.go:195] Run: systemctl --version
	I0717 00:31:35.702524   37091 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:31:35.860996   37091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:31:35.869782   37091 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:31:35.869845   37091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:31:35.878978   37091 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 00:31:35.879007   37091 start.go:495] detecting cgroup driver to use...
	I0717 00:31:35.879098   37091 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:31:35.895504   37091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:31:35.909937   37091 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:31:35.909986   37091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:31:35.923661   37091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:31:35.937352   37091 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:31:36.114537   37091 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:31:36.337616   37091 docker.go:233] disabling docker service ...
	I0717 00:31:36.337696   37091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:31:36.368404   37091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:31:36.382665   37091 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:31:36.542136   37091 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:31:36.694879   37091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:31:36.710588   37091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:31:36.730775   37091 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:31:36.730835   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.742887   37091 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:31:36.742962   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.753720   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.764188   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.774456   37091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:31:36.785055   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.795722   37091 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.806771   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.817066   37091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:31:36.826812   37091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:31:36.836656   37091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:31:36.977073   37091 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:31:46.703564   37091 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.72645615s)
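	The sed edits above (pause image, cgroupfs cgroup manager, pod-scoped conmon cgroup, unprivileged-port sysctl) all target /etc/crio/crio.conf.d/02-crio.conf; a minimal sketch of how that drop-in could look after these edits, assuming the stock section layout shipped in the minikube ISO, is:
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]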
	I0717 00:31:46.703601   37091 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:31:46.703656   37091 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:31:46.708592   37091 start.go:563] Will wait 60s for crictl version
	I0717 00:31:46.708643   37091 ssh_runner.go:195] Run: which crictl
	I0717 00:31:46.712405   37091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:31:46.748919   37091 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:31:46.748989   37091 ssh_runner.go:195] Run: crio --version
	I0717 00:31:46.776791   37091 ssh_runner.go:195] Run: crio --version
	I0717 00:31:46.805919   37091 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:31:46.807247   37091 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:31:46.809680   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:46.810066   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:46.810105   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:46.810335   37091 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:31:46.814801   37091 kubeadm.go:883] updating cluster {Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:31:46.814920   37091 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:31:46.814962   37091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:31:46.864570   37091 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:31:46.864592   37091 crio.go:433] Images already preloaded, skipping extraction
	I0717 00:31:46.864662   37091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:31:46.898334   37091 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:31:46.898361   37091 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:31:46.898374   37091 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.30.2 crio true true} ...
	I0717 00:31:46.898496   37091 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:31:46.898622   37091 ssh_runner.go:195] Run: crio config
	I0717 00:31:46.950419   37091 cni.go:84] Creating CNI manager for ""
	I0717 00:31:46.950449   37091 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 00:31:46.950466   37091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:31:46.950490   37091 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565881 NodeName:ha-565881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:31:46.950650   37091 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565881"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
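	This rendered config is what gets shipped to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below (the 2153-byte scp); a hypothetical way to see whether it differs from the copy already on the machine (the existing file path on the node is an assumption) is:
	# hypothetical: compare the freshly rendered kubeadm config with the one on the node
	minikube -p ha-565881 ssh "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"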
	
	I0717 00:31:46.950675   37091 kube-vip.go:115] generating kube-vip config ...
	I0717 00:31:46.950731   37091 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:31:46.962599   37091 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:31:46.962724   37091 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
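	The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml further down (the 1441-byte scp), so kubelet runs kube-vip as a static pod on each control-plane node and the 192.168.39.254 VIP follows whichever node holds the plndr-cp-lock lease. A hypothetical spot-check once the API server answers (the mirror-pod name assumes the usual <pod>-<node> suffix):
	# hypothetical: confirm the kube-vip static pod and its leader-election activity on the primary node
	kubectl --context ha-565881 -n kube-system get pod kube-vip-ha-565881 -o wide
	kubectl --context ha-565881 -n kube-system logs kube-vip-ha-565881 | grep -i lease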
	I0717 00:31:46.962776   37091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:31:46.972441   37091 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:31:46.972515   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 00:31:46.981722   37091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 00:31:46.998862   37091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:31:47.016994   37091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 00:31:47.040256   37091 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:31:47.056667   37091 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:31:47.061956   37091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:31:47.205261   37091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:31:47.220035   37091 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881 for IP: 192.168.39.238
	I0717 00:31:47.220059   37091 certs.go:194] generating shared ca certs ...
	I0717 00:31:47.220074   37091 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:31:47.220232   37091 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 00:31:47.220289   37091 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 00:31:47.220306   37091 certs.go:256] generating profile certs ...
	I0717 00:31:47.220405   37091 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key
	I0717 00:31:47.220439   37091 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d
	I0717 00:31:47.220463   37091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.14 192.168.39.97 192.168.39.254]
	I0717 00:31:47.358180   37091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d ...
	I0717 00:31:47.358210   37091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d: {Name:mkbe0bb2172102aa8c7ea4b23ce0c7fe570174cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:31:47.358402   37091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d ...
	I0717 00:31:47.358423   37091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d: {Name:mkbcb38a702d9304a89a7717b83e8333c6851c66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:31:47.358518   37091 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt
	I0717 00:31:47.358723   37091 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key
	I0717 00:31:47.358880   37091 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key
	I0717 00:31:47.358905   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:31:47.358923   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:31:47.358947   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:31:47.358964   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:31:47.358980   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:31:47.358996   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:31:47.359014   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:31:47.359031   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:31:47.359093   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 00:31:47.359132   37091 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 00:31:47.359146   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:31:47.359174   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:31:47.359203   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:31:47.359237   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 00:31:47.359289   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:31:47.359329   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.359349   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.359367   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem -> /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.359929   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:31:47.386164   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:31:47.410527   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:31:47.434465   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:31:47.456999   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 00:31:47.480811   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 00:31:47.503411   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:31:47.526710   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:31:47.549885   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 00:31:47.573543   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:31:47.598119   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 00:31:47.621760   37091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:31:47.638631   37091 ssh_runner.go:195] Run: openssl version
	I0717 00:31:47.645238   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 00:31:47.655857   37091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.660235   37091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.660292   37091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.665757   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:31:47.674979   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:31:47.685757   37091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.689981   37091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.690028   37091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.695412   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:31:47.704384   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 00:31:47.714711   37091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.718924   37091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.718961   37091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.724398   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
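The three Run lines per certificate above follow the standard OpenSSL trust-store pattern: compute the subject hash of the PEM, then symlink /etc/ssl/certs/<hash>.0 to it. A rough Go equivalent of the commands executed remotely (a sketch only; the helper name and paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoTrustStore mirrors the "openssl x509 -hash" + "ln -fs" pair from the log.
func linkIntoTrustStore(pemPath, trustDir string) error {
	// openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))

	// /etc/ssl/certs/<hash>.0 is the name OpenSSL-based clients look up.
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, as ln -fs does
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative path; the log links /usr/share/ca-certificates/minikubeCA.pem and friends.
	if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}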
	I0717 00:31:47.733669   37091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:31:47.737932   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 00:31:47.743392   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 00:31:47.748664   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 00:31:47.753938   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 00:31:47.759225   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 00:31:47.764447   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
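Each "-checkend 86400" invocation above asks OpenSSL whether the certificate expires within the next 24 hours. The same check expressed in Go with only the standard library (paths illustrative; this is a sketch, not the harness code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window -- the openssl -checkend equivalent.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	// 86400 seconds == 24h, matching the log's -checkend argument.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}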
	I0717 00:31:47.769709   37091 kubeadm.go:392] StartCluster: {Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
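The StartCluster config above describes an HA profile: three control-plane nodes (192.168.39.238, .14, .97), one worker (m04), and the shared VIP 192.168.39.254. A small hypothetical Go sketch of the sanity check this topology implies (an odd control-plane count so etcd keeps quorum when one node is stopped, which is exactly what the StopSecondaryNode/RestartSecondaryNode tests exercise):

package main

import "fmt"

// node is a pared-down view of the Nodes entries in the config dump above.
type node struct {
	Name         string
	IP           string
	ControlPlane bool
}

func main() {
	nodes := []node{
		{Name: "", IP: "192.168.39.238", ControlPlane: true}, // primary; Name is empty in the dump
		{Name: "m02", IP: "192.168.39.14", ControlPlane: true},
		{Name: "m03", IP: "192.168.39.97", ControlPlane: true},
		{Name: "m04", IP: "192.168.39.79", ControlPlane: false},
	}

	controlPlanes := 0
	for _, n := range nodes {
		if n.ControlPlane {
			controlPlanes++
		}
	}
	// etcd tolerates (n-1)/2 failures, so an odd count >= 3 is what the HA tests expect.
	fmt.Printf("control planes: %d, tolerated failures: %d\n", controlPlanes, (controlPlanes-1)/2)
}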
	I0717 00:31:47.769816   37091 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:31:47.769867   37091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:31:47.806048   37091 cri.go:89] found id: "05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0"
	I0717 00:31:47.806070   37091 cri.go:89] found id: "42119e9324f11f4297cf4f2052d5440773e17236489ca34e1988564acce85cc1"
	I0717 00:31:47.806075   37091 cri.go:89] found id: "8b3db903a1f836c172e85c6e6229a0500c4729281c2733ba22e09d38ec08964b"
	I0717 00:31:47.806079   37091 cri.go:89] found id: "404747229eea4d41bdc771562fc8b910464a0694c31f9ae117eeaec79057382d"
	I0717 00:31:47.806083   37091 cri.go:89] found id: "dcda7fe2ea87d9d0412fd424de512c60b84b972996e99cbd410f5a517bb7bf6a"
	I0717 00:31:47.806087   37091 cri.go:89] found id: "928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519"
	I0717 00:31:47.806091   37091 cri.go:89] found id: "cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c"
	I0717 00:31:47.806095   37091 cri.go:89] found id: "52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146"
	I0717 00:31:47.806099   37091 cri.go:89] found id: "e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f"
	I0717 00:31:47.806106   37091 cri.go:89] found id: "14c44e183ef1f377bf131b0f0b7f0976adbdf72efd90beb01dfa5c8be36324e5"
	I0717 00:31:47.806111   37091 cri.go:89] found id: "1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6"
	I0717 00:31:47.806115   37091 cri.go:89] found id: "ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36"
	I0717 00:31:47.806120   37091 cri.go:89] found id: "2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c"
	I0717 00:31:47.806127   37091 cri.go:89] found id: "c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff"
	I0717 00:31:47.806132   37091 cri.go:89] found id: ""
	I0717 00:31:47.806177   37091 ssh_runner.go:195] Run: sudo runc list -f json
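The "found id" lines above come from the crictl ps -a --quiet invocation, which prints one container ID per line for the kube-system namespace. A hedged Go sketch of how such a listing could be collected locally (assuming crictl is on PATH and the caller has the required privileges):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the logged command:
	//   crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// --quiet prints one container ID per line; Fields also drops blank lines.
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}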
	
	
	==> CRI-O <==
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.317174681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ddcc96d-df20-421f-8720-6d4048dfa412 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.317549637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16cb08b90a1798a1b0decaa10b138dc553746026bcbcbfceef2f14de0a2d0b67,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176365582149050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afd50ddb3c371671dcdf90746290d6cda31d25cb7e2bf4da6cadf9cd80a3ed53,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721176356567613396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176353576784050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d1de2fa4b27327c0ea0d50f22abea07b3bbeedbeabee25fa6b6925c51cae3c,PodSandboxId:6291ee1cd24eed32e2768981e5933e237015a0217240ae4a2f6f250cda33d6fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721176345821835098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c56dc46091fa9f84d51b7daba191ddb12ee8cbac176d8434cd0a3da5e1a6d53a,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721176326551856835,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131,PodSandboxId:455c3609259116bfb5b20b686f8d2a5d595494f71bd762dbb905c3f00e884b64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176317541402080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96,PodSandboxId:bc60e96519276152aef10c68f24dedda86aa0afe25a4954e53f8ce951fc0e31f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721176312845034877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d
197e5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d80b5690981eea250dda269acc5562685be31b48b3a961a26ef1b506571436b,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176312655469041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kub
ernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e,PodSandboxId:9ba19e8f07eabd1cf7ab258280887d8b7be1fb40897a12464b3fb5972aae684a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721176312685003457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371,PodSandboxId:a5da1d69074397b3b15599402878e7ba3eb9bb2f645757cffee61dc6d331ddfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176312654345932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5,PodSandboxId:b18ab0c603ba0b0cb73f9af63e61df1e460b2e9e31d15d4b454150782a4dd7d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176312539976574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832,PodSandboxId:2f58179b1c60fec5e3492abb2bdf627d4b4f10645f32058fb7cd53cc8772972b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176312496085530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9
a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176312440024140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e1
54d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24716b903522f117c90b08bdcedd0af6f5746145b2bac11a85f50f641ed53e2,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176312435096590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annot
ations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0,PodSandboxId:3acd7d3f5c21f5b11cce8554e291d9295ad5bb823f2fcfe3cc1e870c954ba3b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176296198303734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721175803248543045,Labels:map[string]strin
g{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721175667830910521,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721175655675801389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721175653514932581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa1394
53522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721175633405426109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721175633392545693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ddcc96d-df20-421f-8720-6d4048dfa412 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.365860539Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2d26ca8-3179-4985-bc58-8decb06f9825 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.365949171Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2d26ca8-3179-4985-bc58-8decb06f9825 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.367095544Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7392cd89-8c5c-4cd2-9c20-5ce4b2c70e5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.367565770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176891367535603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7392cd89-8c5c-4cd2-9c20-5ce4b2c70e5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.368292982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec0289eb-6b92-46ee-b371-1e8e09c26d2b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.368364868Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec0289eb-6b92-46ee-b371-1e8e09c26d2b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.369773702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16cb08b90a1798a1b0decaa10b138dc553746026bcbcbfceef2f14de0a2d0b67,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176365582149050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afd50ddb3c371671dcdf90746290d6cda31d25cb7e2bf4da6cadf9cd80a3ed53,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721176356567613396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176353576784050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d1de2fa4b27327c0ea0d50f22abea07b3bbeedbeabee25fa6b6925c51cae3c,PodSandboxId:6291ee1cd24eed32e2768981e5933e237015a0217240ae4a2f6f250cda33d6fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721176345821835098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c56dc46091fa9f84d51b7daba191ddb12ee8cbac176d8434cd0a3da5e1a6d53a,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721176326551856835,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131,PodSandboxId:455c3609259116bfb5b20b686f8d2a5d595494f71bd762dbb905c3f00e884b64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176317541402080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96,PodSandboxId:bc60e96519276152aef10c68f24dedda86aa0afe25a4954e53f8ce951fc0e31f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721176312845034877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d
197e5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d80b5690981eea250dda269acc5562685be31b48b3a961a26ef1b506571436b,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176312655469041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kub
ernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e,PodSandboxId:9ba19e8f07eabd1cf7ab258280887d8b7be1fb40897a12464b3fb5972aae684a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721176312685003457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371,PodSandboxId:a5da1d69074397b3b15599402878e7ba3eb9bb2f645757cffee61dc6d331ddfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176312654345932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5,PodSandboxId:b18ab0c603ba0b0cb73f9af63e61df1e460b2e9e31d15d4b454150782a4dd7d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176312539976574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832,PodSandboxId:2f58179b1c60fec5e3492abb2bdf627d4b4f10645f32058fb7cd53cc8772972b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176312496085530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9
a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176312440024140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e1
54d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24716b903522f117c90b08bdcedd0af6f5746145b2bac11a85f50f641ed53e2,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176312435096590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annot
ations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0,PodSandboxId:3acd7d3f5c21f5b11cce8554e291d9295ad5bb823f2fcfe3cc1e870c954ba3b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176296198303734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721175803248543045,Labels:map[string]strin
g{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721175667830910521,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721175655675801389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721175653514932581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa1394
53522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721175633405426109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721175633392545693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec0289eb-6b92-46ee-b371-1e8e09c26d2b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.382276210Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=14fd809c-bedd-46d5-bc2f-c4312955a6fc name=/runtime.v1.ImageService/ListImages
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.382839506Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,RepoTags:[registry.k8s.io/kube-apiserver:v1.30.2],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816 registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d],Size_:117609954,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.2],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b],Size_:112194888,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{
Id:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,RepoTags:[registry.k8s.io/kube-scheduler:v1.30.2],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b],Size_:63051080,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,RepoTags:[registry.k8s.io/kube-proxy:v1.30.2],RepoDigests:[registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961 registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec],Size_:85953433,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 re
gistry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,RepoTags:[registry.k8s.io/etcd:3.5.12-0],RepoDigests:[registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62 registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b],Size_:150779692,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d8
67d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,RepoTags:[docker.io/kindest/kindnetd:v20240513-cd2ac642],RepoDigests:[docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266 docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8],Size_:65908273,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.0],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f ghcr.io/kube-vip/kub
e-vip@sha256:7eb725aff32fd4b31484f6e8e44b538f8403ebc8bd4218ea0ec28218682afff1],Size_:49570267,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,RepoTags:[docker.io/kindest/kindnetd:v20240715-585640e9],RepoDigests:[docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115 docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493],Size_:87165492,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=14fd809c-bedd-46d5-bc2f-c4312955a6fc name=/runtim
e.v1.ImageService/ListImages
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.421329469Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f926d1c-34a7-4ef3-a815-623e9e1f5b91 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.421424024Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f926d1c-34a7-4ef3-a815-623e9e1f5b91 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.422460814Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a424d82f-4956-4e00-bee2-90ad5d8a0f69 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.423145872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176891423121527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a424d82f-4956-4e00-bee2-90ad5d8a0f69 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.423597892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e96944b3-3df9-442b-8f88-f74c23ab3df4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.423683693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e96944b3-3df9-442b-8f88-f74c23ab3df4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.424174831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16cb08b90a1798a1b0decaa10b138dc553746026bcbcbfceef2f14de0a2d0b67,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176365582149050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afd50ddb3c371671dcdf90746290d6cda31d25cb7e2bf4da6cadf9cd80a3ed53,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721176356567613396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176353576784050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d1de2fa4b27327c0ea0d50f22abea07b3bbeedbeabee25fa6b6925c51cae3c,PodSandboxId:6291ee1cd24eed32e2768981e5933e237015a0217240ae4a2f6f250cda33d6fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721176345821835098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c56dc46091fa9f84d51b7daba191ddb12ee8cbac176d8434cd0a3da5e1a6d53a,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721176326551856835,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131,PodSandboxId:455c3609259116bfb5b20b686f8d2a5d595494f71bd762dbb905c3f00e884b64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176317541402080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96,PodSandboxId:bc60e96519276152aef10c68f24dedda86aa0afe25a4954e53f8ce951fc0e31f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721176312845034877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d
197e5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d80b5690981eea250dda269acc5562685be31b48b3a961a26ef1b506571436b,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176312655469041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kub
ernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e,PodSandboxId:9ba19e8f07eabd1cf7ab258280887d8b7be1fb40897a12464b3fb5972aae684a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721176312685003457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371,PodSandboxId:a5da1d69074397b3b15599402878e7ba3eb9bb2f645757cffee61dc6d331ddfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176312654345932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5,PodSandboxId:b18ab0c603ba0b0cb73f9af63e61df1e460b2e9e31d15d4b454150782a4dd7d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176312539976574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832,PodSandboxId:2f58179b1c60fec5e3492abb2bdf627d4b4f10645f32058fb7cd53cc8772972b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176312496085530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9
a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176312440024140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e1
54d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24716b903522f117c90b08bdcedd0af6f5746145b2bac11a85f50f641ed53e2,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176312435096590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annot
ations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0,PodSandboxId:3acd7d3f5c21f5b11cce8554e291d9295ad5bb823f2fcfe3cc1e870c954ba3b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176296198303734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721175803248543045,Labels:map[string]strin
g{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721175667830910521,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721175655675801389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721175653514932581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa1394
53522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721175633405426109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721175633392545693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e96944b3-3df9-442b-8f88-f74c23ab3df4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.466462244Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=573444be-e8be-4edf-858e-c7f908322a6a name=/runtime.v1.RuntimeService/Version
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.466572673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=573444be-e8be-4edf-858e-c7f908322a6a name=/runtime.v1.RuntimeService/Version
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.467583364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4f045e35-7507-44c3-ae5c-74fcf145eb04 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.468261087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176891468237561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f045e35-7507-44c3-ae5c-74fcf145eb04 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.468897650Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25f26bd5-b2f8-4e0e-b895-626aa8b5ddd5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.469021236Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25f26bd5-b2f8-4e0e-b895-626aa8b5ddd5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:41:31 ha-565881 crio[3887]: time="2024-07-17 00:41:31.469393619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16cb08b90a1798a1b0decaa10b138dc553746026bcbcbfceef2f14de0a2d0b67,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176365582149050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afd50ddb3c371671dcdf90746290d6cda31d25cb7e2bf4da6cadf9cd80a3ed53,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721176356567613396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176353576784050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d1de2fa4b27327c0ea0d50f22abea07b3bbeedbeabee25fa6b6925c51cae3c,PodSandboxId:6291ee1cd24eed32e2768981e5933e237015a0217240ae4a2f6f250cda33d6fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721176345821835098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c56dc46091fa9f84d51b7daba191ddb12ee8cbac176d8434cd0a3da5e1a6d53a,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721176326551856835,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131,PodSandboxId:455c3609259116bfb5b20b686f8d2a5d595494f71bd762dbb905c3f00e884b64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176317541402080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96,PodSandboxId:bc60e96519276152aef10c68f24dedda86aa0afe25a4954e53f8ce951fc0e31f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721176312845034877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d
197e5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d80b5690981eea250dda269acc5562685be31b48b3a961a26ef1b506571436b,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176312655469041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kub
ernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e,PodSandboxId:9ba19e8f07eabd1cf7ab258280887d8b7be1fb40897a12464b3fb5972aae684a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721176312685003457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371,PodSandboxId:a5da1d69074397b3b15599402878e7ba3eb9bb2f645757cffee61dc6d331ddfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176312654345932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5,PodSandboxId:b18ab0c603ba0b0cb73f9af63e61df1e460b2e9e31d15d4b454150782a4dd7d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176312539976574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832,PodSandboxId:2f58179b1c60fec5e3492abb2bdf627d4b4f10645f32058fb7cd53cc8772972b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176312496085530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9
a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176312440024140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e1
54d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24716b903522f117c90b08bdcedd0af6f5746145b2bac11a85f50f641ed53e2,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176312435096590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annot
ations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0,PodSandboxId:3acd7d3f5c21f5b11cce8554e291d9295ad5bb823f2fcfe3cc1e870c954ba3b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176296198303734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721175803248543045,Labels:map[string]strin
g{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721175667830910521,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721175655675801389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721175653514932581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa1394
53522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721175633405426109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721175633392545693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25f26bd5-b2f8-4e0e-b895-626aa8b5ddd5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	16cb08b90a179       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       4                   002ff42b3204b       storage-provisioner
	afd50ddb3c371       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      8 minutes ago       Running             kube-apiserver            3                   03f0287dade77       kube-apiserver-ha-565881
	1293602792aa7       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      8 minutes ago       Running             kube-controller-manager   2                   be9e8898804ae       kube-controller-manager-ha-565881
	d5d1de2fa4b27       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      9 minutes ago       Running             busybox                   1                   6291ee1cd24ee       busybox-fc5497c4f-sxdsp
	c56dc46091fa9       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      9 minutes ago       Running             kube-vip                  0                   249e7577f5374       kube-vip-ha-565881
	6e4b939607467       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Running             coredns                   2                   455c360925911       coredns-7db6d8ff4d-xftzx
	02a737bcd9b5f       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      9 minutes ago       Running             kube-proxy                1                   bc60e96519276       kube-proxy-7p2jl
	410067f28bfdb       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      9 minutes ago       Running             kindnet-cni               1                   9ba19e8f07eab       kindnet-5lrdt
	7d80b5690981e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       3                   002ff42b3204b       storage-provisioner
	7a7dd9858b20e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Running             coredns                   1                   a5da1d6907439       coredns-7db6d8ff4d-7wsqq
	fb316c8a568ce       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Running             etcd                      1                   b18ab0c603ba0       etcd-ha-565881
	85245e283143e       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      9 minutes ago       Running             kube-scheduler            1                   2f58179b1c60f       kube-scheduler-ha-565881
	583c9df0a3d19       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      9 minutes ago       Exited              kube-controller-manager   1                   be9e8898804ae       kube-controller-manager-ha-565881
	e24716b903522       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      9 minutes ago       Exited              kube-apiserver            2                   03f0287dade77       kube-apiserver-ha-565881
	05847440b65b8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   1                   3acd7d3f5c21f       coredns-7db6d8ff4d-xftzx
	28b495a055524       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   18 minutes ago      Exited              busybox                   0                   e0bd927bf2760       busybox-fc5497c4f-sxdsp
	928ee85bf546b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Exited              coredns                   0                   f688446a5f59c       coredns-7db6d8ff4d-7wsqq
	52b45808cde82       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    20 minutes ago      Exited              kindnet-cni               0                   5c5494014c8b1       kindnet-5lrdt
	e572bb9aec2e8       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      20 minutes ago      Exited              kube-proxy                0                   12f43031f4b04       kube-proxy-7p2jl
	1ec015ce8f841       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      20 minutes ago      Exited              kube-scheduler            0                   a6e2148781333       kube-scheduler-ha-565881
	ab8577693652f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Exited              etcd                      0                   afbb712100717       etcd-ha-565881
	
	
	==> coredns [05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45091 - 7026 "HINFO IN 1445449914924310106.5846422275679746414. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012221557s
	
	
	==> coredns [6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131] <==
	[INFO] plugin/kubernetes: Trace[719800634]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:32:04.225) (total time: 13562ms):
	Trace[719800634]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:33770->10.96.0.1:443: read: connection reset by peer 13562ms (00:32:17.787)
	Trace[719800634]: [13.562660788s] [13.562660788s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:33770->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:33240->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:33240->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:33216->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:33216->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[693255936]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:32:01.753) (total time: 10001ms):
	Trace[693255936]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:32:11.754)
	Trace[693255936]: [10.001792401s] [10.001792401s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:55726->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:55726->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519] <==
	[INFO] 10.244.0.4:59609 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005296s
	[INFO] 10.244.0.4:41601 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174604s
	[INFO] 10.244.2.2:54282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144175s
	[INFO] 10.244.2.2:33964 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000291713s
	[INFO] 10.244.2.2:38781 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098409s
	[INFO] 10.244.1.2:58603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132708s
	[INFO] 10.244.2.2:42857 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129277s
	[INFO] 10.244.2.2:45518 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176537s
	[INFO] 10.244.1.2:38437 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111768s
	[INFO] 10.244.1.2:41860 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000210674s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1916&timeout=7m36s&timeoutSeconds=456&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1941&timeout=7m53s&timeoutSeconds=473&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1217777566]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:29:52.269) (total time: 10589ms):
	Trace[1217777566]: ---"Objects listed" error:Unauthorized 10588ms (00:30:02.858)
	Trace[1217777566]: [10.589536721s] [10.589536721s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1846856979]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:29:52.635) (total time: 10227ms):
	Trace[1846856979]: ---"Objects listed" error:Unauthorized 10226ms (00:30:02.861)
	Trace[1846856979]: [10.227274956s] [10.227274956s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-565881
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_20_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:20:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:41:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:37:56 +0000   Wed, 17 Jul 2024 00:20:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:37:56 +0000   Wed, 17 Jul 2024 00:20:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:37:56 +0000   Wed, 17 Jul 2024 00:20:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:37:56 +0000   Wed, 17 Jul 2024 00:21:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-565881
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6899f2542334306bf4c50f49702dfb5
	  System UUID:                c6899f25-4233-4306-bf4c-50f49702dfb5
	  Boot ID:                    f5b041e8-ae19-4f7a-ac0d-a039fbca796b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sxdsp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-7wsqq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-7db6d8ff4d-xftzx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-ha-565881                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-5lrdt                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-565881             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-565881    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-7p2jl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-565881             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-565881                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 20m                  kube-proxy       
	  Normal   Starting                 8m54s                kube-proxy       
	  Normal   NodeHasSufficientPID     20m                  kubelet          Node ha-565881 status is now: NodeHasSufficientPID
	  Normal   Starting                 20m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  20m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  20m                  kubelet          Node ha-565881 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m                  kubelet          Node ha-565881 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           20m                  node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	  Normal   NodeReady                20m                  kubelet          Node ha-565881 status is now: NodeReady
	  Normal   RegisteredNode           19m                  node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	  Normal   RegisteredNode           18m                  node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	  Warning  ContainerGCFailed        9m52s (x2 over 10m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           8m46s                node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	  Normal   RegisteredNode           8m40s                node-controller  Node ha-565881 event: Registered Node ha-565881 in Controller
	
	
	Name:               ha-565881-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_21_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:21:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:41:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:38:26 +0000   Wed, 17 Jul 2024 00:32:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:38:26 +0000   Wed, 17 Jul 2024 00:32:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:38:26 +0000   Wed, 17 Jul 2024 00:32:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:38:26 +0000   Wed, 17 Jul 2024 00:32:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-565881-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 002cfcb8afdc450f9dbf024dbe1dd968
	  System UUID:                002cfcb8-afdc-450f-9dbf-024dbe1dd968
	  Boot ID:                    09d30567-9ab8-4527-b894-0f75dcd209ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rdpwj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-565881-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-k882n                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-565881-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-565881-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-2f9rj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-565881-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-565881-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 8m36s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)      kubelet          Node ha-565881-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)      kubelet          Node ha-565881-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)      kubelet          Node ha-565881-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                    node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  RegisteredNode           19m                    node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  NodeNotReady             16m                    node-controller  Node ha-565881-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  9m20s (x8 over 9m20s)  kubelet          Node ha-565881-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    9m20s (x8 over 9m20s)  kubelet          Node ha-565881-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s (x7 over 9m20s)  kubelet          Node ha-565881-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m46s                  node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	  Normal  RegisteredNode           8m40s                  node-controller  Node ha-565881-m02 event: Registered Node ha-565881-m02 in Controller
	
	
	Name:               ha-565881-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565881-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=ha-565881
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_23_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:23:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565881-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:27:54 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:33:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:33:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:33:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 00:24:29 +0000   Wed, 17 Jul 2024 00:33:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-565881-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 008ae63d929d475b8bab60c832202ce9
	  System UUID:                008ae63d-929d-475b-8bab-60c832202ce9
	  Boot ID:                    3540bc22-336a-438e-8b63-852810ced32c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-xz7nj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-proxy-p5xml    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  RegisteredNode           17m                node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  NodeHasSufficientMemory  17m (x2 over 17m)  kubelet          Node ha-565881-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x2 over 17m)  kubelet          Node ha-565881-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x2 over 17m)  kubelet          Node ha-565881-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  NodeReady                17m                kubelet          Node ha-565881-m04 status is now: NodeReady
	  Normal  RegisteredNode           8m46s              node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  RegisteredNode           8m40s              node-controller  Node ha-565881-m04 event: Registered Node ha-565881-m04 in Controller
	  Normal  NodeNotReady             8m6s               node-controller  Node ha-565881-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +8.825427] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057593] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065677] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.195559] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.109938] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.261884] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.129275] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.597572] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.062309] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.075955] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.082514] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.034910] kauditd_printk_skb: 21 callbacks suppressed
	[Jul17 00:21] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.822749] kauditd_printk_skb: 24 callbacks suppressed
	[Jul17 00:28] kauditd_printk_skb: 1 callbacks suppressed
	[Jul17 00:31] systemd-fstab-generator[3692]: Ignoring "noauto" option for root device
	[  +0.216028] systemd-fstab-generator[3758]: Ignoring "noauto" option for root device
	[  +0.227364] systemd-fstab-generator[3827]: Ignoring "noauto" option for root device
	[  +0.155492] systemd-fstab-generator[3839]: Ignoring "noauto" option for root device
	[  +0.283345] systemd-fstab-generator[3868]: Ignoring "noauto" option for root device
	[ +10.230037] systemd-fstab-generator[3997]: Ignoring "noauto" option for root device
	[  +0.086916] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.012354] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.429241] kauditd_printk_skb: 73 callbacks suppressed
	[Jul17 00:32] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36] <==
	2024/07/17 00:30:04 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-17T00:30:04.424266Z","caller":"traceutil/trace.go:171","msg":"trace[2084447961] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; }","duration":"568.016613ms","start":"2024-07-17T00:30:03.856238Z","end":"2024-07-17T00:30:04.424254Z","steps":["trace[2084447961] 'agreement among raft nodes before linearized reading'  (duration: 553.842388ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:30:04.429786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:30:03.856232Z","time spent":"573.541514ms","remote":"127.0.0.1:35722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:500 "}
	2024/07/17 00:30:04 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T00:30:04.576153Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":10056697113903918594,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-17T00:30:04.687881Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.238:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:30:04.687938Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.238:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T00:30:04.688033Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fff3906243738b90","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-17T00:30:04.68823Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.688475Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.688593Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.688783Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.68891Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.688995Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.689036Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.689044Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689055Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.68908Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689157Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fff3906243738b90","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689409Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689445Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689456Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.692456Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-07-17T00:30:04.692685Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-07-17T00:30:04.692785Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-565881","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.238:2380"],"advertise-client-urls":["https://192.168.39.238:2379"]}
	
	
	==> etcd [fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5] <==
	{"level":"warn","ts":"2024-07-17T00:41:10.139391Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:10.139466Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:13.419169Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:13.419254Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:14.141412Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:14.141591Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:18.142923Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:18.142977Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:18.421817Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:18.421909Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:22.144102Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.97:2380/version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:22.14417Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9e80507bffdb4d1","error":"Get \"https://192.168.39.97:2380/version\": dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:23.421933Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:41:23.422119Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9e80507bffdb4d1","rtt":"0s","error":"dial tcp 192.168.39.97:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-17T00:41:23.518897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 switched to configuration voters=(15900606947593249832 18443243650725153680)"}
	{"level":"info","ts":"2024-07-17T00:41:23.520828Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"3658928c14b8a733","local-member-id":"fff3906243738b90","removed-remote-peer-id":"e9e80507bffdb4d1","removed-remote-peer-urls":["https://192.168.39.97:2380"]}
	{"level":"info","ts":"2024-07-17T00:41:23.520967Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:41:23.521007Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:41:23.521069Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:41:23.521131Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fff3906243738b90","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:41:23.521173Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:41:23.52122Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:41:23.521252Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:41:23.521283Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"fff3906243738b90","removed-remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:41:23.521349Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"fff3906243738b90","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"e9e80507bffdb4d1"}
	
	
	==> kernel <==
	 00:41:32 up 21 min,  0 users,  load average: 0.49, 0.44, 0.37
	Linux ha-565881 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e] <==
	I0717 00:40:53.895472       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:41:03.895143       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:41:03.895335       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:41:03.895622       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:41:03.895675       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:41:03.895874       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:41:03.895903       1 main.go:303] handling current node
	I0717 00:41:03.895941       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:41:03.895958       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:41:13.898969       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:41:13.899058       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:41:13.899243       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:41:13.899269       1 main.go:303] handling current node
	I0717 00:41:13.899294       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:41:13.899299       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:41:13.899348       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:41:13.899367       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:41:23.895348       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:41:23.895503       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:41:23.896143       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:41:23.896254       1 main.go:303] handling current node
	I0717 00:41:23.896525       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:41:23.896625       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:41:23.897085       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:41:23.897211       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146] <==
	I0717 00:29:36.730573       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:29:36.730625       1 main.go:303] handling current node
	I0717 00:29:36.730649       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:29:36.730656       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:29:36.730916       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:29:36.730951       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:29:36.731053       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:29:36.731081       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:29:46.727855       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:29:46.727918       1 main.go:303] handling current node
	I0717 00:29:46.727931       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:29:46.727937       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:29:46.728154       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:29:46.728180       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:29:46.728251       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:29:46.728270       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:29:56.722847       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:29:56.722880       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:29:56.723110       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:29:56.723136       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:29:56.723203       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:29:56.723223       1 main.go:303] handling current node
	I0717 00:29:56.723239       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:29:56.723243       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	E0717 00:30:02.872654       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> kube-apiserver [afd50ddb3c371671dcdf90746290d6cda31d25cb7e2bf4da6cadf9cd80a3ed53] <==
	I0717 00:32:38.483238       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0717 00:32:38.483272       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0717 00:32:38.483288       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0717 00:32:38.571245       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 00:32:38.575686       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 00:32:38.576230       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 00:32:38.577277       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 00:32:38.577354       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 00:32:38.578167       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 00:32:38.578274       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 00:32:38.578363       1 policy_source.go:224] refreshing policies
	I0717 00:32:38.580666       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 00:32:38.584193       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 00:32:38.584289       1 aggregator.go:165] initial CRD sync complete...
	I0717 00:32:38.584324       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 00:32:38.584329       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 00:32:38.584335       1 cache.go:39] Caches are synced for autoregister controller
	I0717 00:32:38.585962       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 00:32:38.664363       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0717 00:32:38.713497       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.14]
	I0717 00:32:38.715905       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:32:38.726599       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0717 00:32:38.738160       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0717 00:32:39.488518       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 00:32:39.957907       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.14 192.168.39.238]
	
	
	==> kube-apiserver [e24716b903522f117c90b08bdcedd0af6f5746145b2bac11a85f50f641ed53e2] <==
	I0717 00:31:53.403645       1 options.go:221] external host was not specified, using 192.168.39.238
	I0717 00:31:53.408787       1 server.go:148] Version: v1.30.2
	I0717 00:31:53.408890       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:31:54.031222       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 00:31:54.036627       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 00:31:54.039836       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 00:31:54.039910       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 00:31:54.040103       1 instance.go:299] Using reconciler: lease
	W0717 00:32:14.030009       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0717 00:32:14.030012       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0717 00:32:14.041256       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c] <==
	I0717 00:32:52.219876       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:32:57.613255       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-r95fq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-r95fq\": the object has been modified; please apply your changes to the latest version and try again"
	I0717 00:32:57.614215       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"d3ba4588-dd39-49d8-9dff-c1b4d5aa821c", APIVersion:"v1", ResourceVersion:"246", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-r95fq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-r95fq": the object has been modified; please apply your changes to the latest version and try again
	I0717 00:32:57.639058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="101.183292ms"
	I0717 00:32:57.683113       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.92544ms"
	I0717 00:32:57.683299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="75.568µs"
	I0717 00:32:59.328956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.966331ms"
	I0717 00:32:59.329054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.845µs"
	I0717 00:33:16.923162       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.133862ms"
	I0717 00:33:16.923535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.167µs"
	I0717 00:33:25.046500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.481722ms"
	I0717 00:33:25.047235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.736µs"
	I0717 00:41:20.241402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.867971ms"
	I0717 00:41:20.329302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.34285ms"
	I0717 00:41:20.368629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.596846ms"
	I0717 00:41:20.368878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.464µs"
	I0717 00:41:22.318459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.678µs"
	I0717 00:41:22.500766       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.389µs"
	I0717 00:41:22.517517       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.586µs"
	I0717 00:41:22.525042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.892µs"
	E0717 00:41:31.666015       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565881-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565881-m03"
	E0717 00:41:31.666052       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565881-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565881-m03"
	E0717 00:41:31.666060       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565881-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565881-m03"
	E0717 00:41:31.666065       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565881-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565881-m03"
	E0717 00:41:31.666070       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565881-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565881-m03"
	
	
	==> kube-controller-manager [583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b] <==
	I0717 00:31:53.451857       1 serving.go:380] Generated self-signed cert in-memory
	I0717 00:31:54.330220       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 00:31:54.330410       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:31:54.332377       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 00:31:54.332563       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 00:31:54.333171       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 00:31:54.333111       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0717 00:32:15.048922       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.238:8443/healthz\": dial tcp 192.168.39.238:8443: connect: connection refused"
	
	
	==> kube-proxy [02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96] <==
	E0717 00:32:18.747232       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-565881\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 00:32:37.188740       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-565881\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0717 00:32:37.188819       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0717 00:32:37.274295       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:32:37.274381       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:32:37.274407       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:32:37.281013       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:32:37.288028       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:32:37.288061       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:32:37.302361       1 config.go:192] "Starting service config controller"
	I0717 00:32:37.302412       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:32:37.302433       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:32:37.302437       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:32:37.309370       1 config.go:319] "Starting node config controller"
	I0717 00:32:37.309425       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0717 00:32:40.251145       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0717 00:32:40.251120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:32:40.251290       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:32:40.251308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:32:40.251369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:32:40.251382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:32:40.251422       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0717 00:32:41.302914       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:32:41.509914       1 shared_informer.go:320] Caches are synced for node config
	I0717 00:32:41.702629       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f] <==
	E0717 00:28:59.070494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:02.139362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:02.139481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:02.139619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:02.139672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:02.139814       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:02.139854       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:08.284785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:08.285266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:08.285574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:08.285674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:08.285577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:08.285769       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:17.500214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:17.500543       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:20.571289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:20.571426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:20.571642       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:20.572230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:39.005784       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:39.006082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:42.075640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:42.075804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:42.075993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:42.076034       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6] <==
	W0717 00:29:59.991637       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:29:59.991777       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:30:00.157330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:30:00.157432       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:30:00.277280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:30:00.277385       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:30:00.411889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:30:00.411986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:30:00.443474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:30:00.443527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:30:00.612176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 00:30:00.612230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:30:00.899110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:30:00.899224       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:30:01.520074       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:30:01.520137       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:30:01.959591       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:30:01.959649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:30:02.057654       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:30:02.057750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:30:02.126931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:30:02.126999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:30:02.459997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:30:02.460096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:30:04.395180       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832] <==
	W0717 00:32:31.202397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.238:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:31.202482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.238:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:31.486990       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.238:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:31.487057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.238:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:31.868585       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.238:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:31.868685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.238:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:32.160842       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.238:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:32.160995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.238:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:33.000923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.238:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:33.000999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.238:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:33.390403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.238:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:33.390520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.238:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:33.850159       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.238:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:33.850264       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.238:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:34.946080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:34.946141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:35.143908       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.238:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:35.143978       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.238:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:35.185931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.238:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:35.186052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.238:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:35.518309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.238:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:35.518457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.238:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:32:35.579179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:32:35.579240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	I0717 00:32:50.859089       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:36:39 ha-565881 kubelet[1370]: E0717 00:36:39.573603    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:36:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:36:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:36:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:36:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:37:39 ha-565881 kubelet[1370]: E0717 00:37:39.573148    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:37:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:37:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:37:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:37:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:38:39 ha-565881 kubelet[1370]: E0717 00:38:39.579086    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:38:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:38:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:38:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:38:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:39:39 ha-565881 kubelet[1370]: E0717 00:39:39.572324    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:39:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:39:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:39:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:39:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:40:39 ha-565881 kubelet[1370]: E0717 00:40:39.577783    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:40:39 ha-565881 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:40:39 ha-565881 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:40:39 ha-565881 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:40:39 ha-565881 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0717 00:41:31.064880   40133 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19265-12897/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565881 -n ha-565881
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565881 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-qd26c etcd-ha-565881-m03 kube-controller-manager-ha-565881-m03 kube-scheduler-ha-565881-m03 kube-vip-ha-565881-m03
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-565881 describe pod busybox-fc5497c4f-qd26c etcd-ha-565881-m03 kube-controller-manager-ha-565881-m03 kube-scheduler-ha-565881-m03 kube-vip-ha-565881-m03
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ha-565881 describe pod busybox-fc5497c4f-qd26c etcd-ha-565881-m03 kube-controller-manager-ha-565881-m03 kube-scheduler-ha-565881-m03 kube-vip-ha-565881-m03: exit status 1 (77.442085ms)

-- stdout --
	Name:             busybox-fc5497c4f-qd26c
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s4whj (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-s4whj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  11s (x2 over 13s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  11s (x2 over 13s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "etcd-ha-565881-m03" not found
	Error from server (NotFound): pods "kube-controller-manager-ha-565881-m03" not found
	Error from server (NotFound): pods "kube-scheduler-ha-565881-m03" not found
	Error from server (NotFound): pods "kube-vip-ha-565881-m03" not found

** /stderr **
helpers_test.go:279: kubectl --context ha-565881 describe pod busybox-fc5497c4f-qd26c etcd-ha-565881-m03 kube-controller-manager-ha-565881-m03 kube-scheduler-ha-565881-m03 kube-vip-ha-565881-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (13.55s)
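For context on the FailedScheduling events above: "didn't match pod anti-affinity rules" is what the scheduler reports when a pod carries a required podAntiAffinity term and every remaining candidate node already runs a matching pod. A minimal sketch of that kind of constraint, assuming an app=busybox label selector and kubernetes.io/hostname topology (it mirrors the symptom, not necessarily the exact manifest the test deploys, and assumes the k8s.io/api and k8s.io/apimachinery modules are available):

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Required anti-affinity: no two pods labelled app=busybox may share a node.
		// With one node tainted unreachable, one unschedulable, and the two remaining
		// nodes each already running a matching pod, a new replica stays Pending,
		// which is the 0/4 breakdown reported in the events above.
		affinity := corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "busybox"},
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		}
		out, _ := json.MarshalIndent(affinity, "", "  ")
		fmt.Println(string(out))
	}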

x
+
TestMultiControlPlane/serial/StopCluster (173.14s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 stop -v=7 --alsologtostderr
E0717 00:42:12.450701   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565881 stop -v=7 --alsologtostderr: exit status 82 (2m1.699726608s)

-- stdout --
	* Stopping node "ha-565881-m04"  ...
	* Stopping node "ha-565881-m02"  ...
	
	

-- /stdout --
** stderr ** 
	I0717 00:41:33.538894   40253 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:41:33.539149   40253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:41:33.539158   40253 out.go:304] Setting ErrFile to fd 2...
	I0717 00:41:33.539163   40253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:41:33.539386   40253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:41:33.539646   40253 out.go:298] Setting JSON to false
	I0717 00:41:33.539730   40253 mustload.go:65] Loading cluster: ha-565881
	I0717 00:41:33.540124   40253 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:41:33.540217   40253 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
	I0717 00:41:33.540502   40253 mustload.go:65] Loading cluster: ha-565881
	I0717 00:41:33.540687   40253 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:41:33.540715   40253 stop.go:39] StopHost: ha-565881-m04
	I0717 00:41:33.541198   40253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:33.541243   40253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:33.557864   40253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0717 00:41:33.558339   40253 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:33.558884   40253 main.go:141] libmachine: Using API Version  1
	I0717 00:41:33.558912   40253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:33.559276   40253 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:33.561619   40253 out.go:177] * Stopping node "ha-565881-m04"  ...
	I0717 00:41:33.563302   40253 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 00:41:33.563333   40253 main.go:141] libmachine: (ha-565881-m04) Calling .DriverName
	I0717 00:41:33.563566   40253 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 00:41:33.563590   40253 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	I0717 00:41:33.565233   40253 retry.go:31] will retry after 272.96767ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0717 00:41:33.838779   40253 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	I0717 00:41:33.840720   40253 retry.go:31] will retry after 286.743174ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0717 00:41:34.128033   40253 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	I0717 00:41:34.129744   40253 retry.go:31] will retry after 627.604538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0717 00:41:34.757494   40253 main.go:141] libmachine: (ha-565881-m04) Calling .GetSSHHostname
	W0717 00:41:34.759036   40253 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0717 00:41:34.759077   40253 main.go:141] libmachine: Stopping "ha-565881-m04"...
	I0717 00:41:34.759087   40253 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:41:34.760072   40253 stop.go:66] stop err: Machine "ha-565881-m04" is already stopped.
	I0717 00:41:34.760092   40253 stop.go:69] host is already stopped
	I0717 00:41:34.760102   40253 stop.go:39] StopHost: ha-565881-m02
	I0717 00:41:34.760437   40253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:34.760481   40253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:34.776059   40253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43579
	I0717 00:41:34.776640   40253 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:34.777149   40253 main.go:141] libmachine: Using API Version  1
	I0717 00:41:34.777171   40253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:34.777508   40253 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:34.779823   40253 out.go:177] * Stopping node "ha-565881-m02"  ...
	I0717 00:41:34.781110   40253 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 00:41:34.781138   40253 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:41:34.781388   40253 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 00:41:34.781411   40253 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:41:34.784552   40253 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:41:34.785032   40253 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:31:58 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:41:34.785059   40253 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:41:34.785228   40253 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:41:34.785410   40253 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:41:34.785554   40253 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:41:34.785683   40253 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	I0717 00:41:34.879702   40253 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 00:41:34.937388   40253 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 00:41:34.993506   40253 main.go:141] libmachine: Stopping "ha-565881-m02"...
	I0717 00:41:34.993547   40253 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:41:34.995187   40253 main.go:141] libmachine: (ha-565881-m02) Calling .Stop
	I0717 00:41:34.998748   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 0/120
	I0717 00:41:36.000131   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 1/120
	I0717 00:41:37.001598   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 2/120
	I0717 00:41:38.003451   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 3/120
	I0717 00:41:39.004886   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 4/120
	I0717 00:41:40.006862   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 5/120
	I0717 00:41:41.008136   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 6/120
	I0717 00:41:42.009841   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 7/120
	I0717 00:41:43.011095   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 8/120
	I0717 00:41:44.012782   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 9/120
	I0717 00:41:45.014993   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 10/120
	I0717 00:41:46.016493   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 11/120
	I0717 00:41:47.018097   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 12/120
	I0717 00:41:48.019653   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 13/120
	I0717 00:41:49.021418   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 14/120
	I0717 00:41:50.023239   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 15/120
	I0717 00:41:51.024676   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 16/120
	I0717 00:41:52.026169   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 17/120
	I0717 00:41:53.027423   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 18/120
	I0717 00:41:54.028880   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 19/120
	I0717 00:41:55.031140   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 20/120
	I0717 00:41:56.032585   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 21/120
	I0717 00:41:57.034131   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 22/120
	I0717 00:41:58.035489   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 23/120
	I0717 00:41:59.036944   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 24/120
	I0717 00:42:00.038863   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 25/120
	I0717 00:42:01.040503   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 26/120
	I0717 00:42:02.042164   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 27/120
	I0717 00:42:03.043382   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 28/120
	I0717 00:42:04.045015   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 29/120
	I0717 00:42:05.047668   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 30/120
	I0717 00:42:06.049308   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 31/120
	I0717 00:42:07.050712   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 32/120
	I0717 00:42:08.052107   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 33/120
	I0717 00:42:09.053754   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 34/120
	I0717 00:42:10.055725   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 35/120
	I0717 00:42:11.057688   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 36/120
	I0717 00:42:12.060054   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 37/120
	I0717 00:42:13.061499   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 38/120
	I0717 00:42:14.062877   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 39/120
	I0717 00:42:15.064700   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 40/120
	I0717 00:42:16.065884   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 41/120
	I0717 00:42:17.067486   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 42/120
	I0717 00:42:18.068871   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 43/120
	I0717 00:42:19.070413   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 44/120
	I0717 00:42:20.072334   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 45/120
	I0717 00:42:21.073748   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 46/120
	I0717 00:42:22.075593   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 47/120
	I0717 00:42:23.077143   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 48/120
	I0717 00:42:24.079684   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 49/120
	I0717 00:42:25.081591   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 50/120
	I0717 00:42:26.083006   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 51/120
	I0717 00:42:27.084248   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 52/120
	I0717 00:42:28.085646   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 53/120
	I0717 00:42:29.086898   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 54/120
	I0717 00:42:30.088479   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 55/120
	I0717 00:42:31.089729   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 56/120
	I0717 00:42:32.091086   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 57/120
	I0717 00:42:33.092370   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 58/120
	I0717 00:42:34.093556   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 59/120
	I0717 00:42:35.095179   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 60/120
	I0717 00:42:36.096681   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 61/120
	I0717 00:42:37.097938   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 62/120
	I0717 00:42:38.099404   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 63/120
	I0717 00:42:39.100933   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 64/120
	I0717 00:42:40.102614   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 65/120
	I0717 00:42:41.104297   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 66/120
	I0717 00:42:42.105728   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 67/120
	I0717 00:42:43.107115   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 68/120
	I0717 00:42:44.108466   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 69/120
	I0717 00:42:45.110004   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 70/120
	I0717 00:42:46.111420   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 71/120
	I0717 00:42:47.112951   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 72/120
	I0717 00:42:48.114313   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 73/120
	I0717 00:42:49.115650   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 74/120
	I0717 00:42:50.117511   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 75/120
	I0717 00:42:51.118872   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 76/120
	I0717 00:42:52.120465   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 77/120
	I0717 00:42:53.121783   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 78/120
	I0717 00:42:54.123462   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 79/120
	I0717 00:42:55.125337   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 80/120
	I0717 00:42:56.126893   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 81/120
	I0717 00:42:57.128867   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 82/120
	I0717 00:42:58.130298   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 83/120
	I0717 00:42:59.132211   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 84/120
	I0717 00:43:00.134027   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 85/120
	I0717 00:43:01.135208   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 86/120
	I0717 00:43:02.136666   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 87/120
	I0717 00:43:03.138003   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 88/120
	I0717 00:43:04.139505   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 89/120
	I0717 00:43:05.141441   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 90/120
	I0717 00:43:06.143032   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 91/120
	I0717 00:43:07.144544   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 92/120
	I0717 00:43:08.145899   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 93/120
	I0717 00:43:09.147299   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 94/120
	I0717 00:43:10.149303   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 95/120
	I0717 00:43:11.150790   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 96/120
	I0717 00:43:12.152173   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 97/120
	I0717 00:43:13.153825   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 98/120
	I0717 00:43:14.155731   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 99/120
	I0717 00:43:15.157410   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 100/120
	I0717 00:43:16.159062   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 101/120
	I0717 00:43:17.160482   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 102/120
	I0717 00:43:18.161947   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 103/120
	I0717 00:43:19.163637   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 104/120
	I0717 00:43:20.165535   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 105/120
	I0717 00:43:21.166892   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 106/120
	I0717 00:43:22.168490   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 107/120
	I0717 00:43:23.170496   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 108/120
	I0717 00:43:24.171829   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 109/120
	I0717 00:43:25.173692   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 110/120
	I0717 00:43:26.175041   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 111/120
	I0717 00:43:27.176649   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 112/120
	I0717 00:43:28.177990   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 113/120
	I0717 00:43:29.179736   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 114/120
	I0717 00:43:30.181737   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 115/120
	I0717 00:43:31.183904   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 116/120
	I0717 00:43:32.185199   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 117/120
	I0717 00:43:33.186646   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 118/120
	I0717 00:43:34.188085   40253 main.go:141] libmachine: (ha-565881-m02) Waiting for machine to stop 119/120
	I0717 00:43:35.188659   40253 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 00:43:35.188720   40253 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 00:43:35.190783   40253 out.go:177] 
	W0717 00:43:35.192188   40253 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 00:43:35.192203   40253 out.go:239] * 
	* 
	W0717 00:43:35.195330   40253 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 00:43:35.196823   40253 out.go:177] 

** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-565881 stop -v=7 --alsologtostderr": exit status 82
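The "Waiting for machine to stop N/120" lines above show the stop path polling the VM state once per second for 120 attempts before giving up with GUEST_STOP_TIMEOUT. A rough sketch of that polling pattern, with hypothetical names (getState is not the kvm2 driver's API):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls getState once per second, up to attempts times,
	// and fails if the machine never reports "Stopped".
	func waitForStop(name string, attempts int, getState func() string) error {
		for i := 0; i < attempts; i++ {
			if getState() == "Stopped" {
				return nil
			}
			fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Stand-in for a guest that never finishes its ACPI shutdown.
		stuck := func() string { return "Running" }
		if err := waitForStop("ha-565881-m02", 3, stuck); err != nil {
			fmt.Println("stop err:", err)
		}
	}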
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr: exit status 7 (34.179270574s)

-- stdout --
	ha-565881
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-565881-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-565881-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 00:43:35.240701   40718 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:43:35.240834   40718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:43:35.240843   40718 out.go:304] Setting ErrFile to fd 2...
	I0717 00:43:35.240849   40718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:43:35.241599   40718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:43:35.242202   40718 out.go:298] Setting JSON to false
	I0717 00:43:35.242235   40718 mustload.go:65] Loading cluster: ha-565881
	I0717 00:43:35.242342   40718 notify.go:220] Checking for updates...
	I0717 00:43:35.242684   40718 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:43:35.242699   40718 status.go:255] checking status of ha-565881 ...
	I0717 00:43:35.243100   40718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:43:35.243155   40718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:43:35.263161   40718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44565
	I0717 00:43:35.263578   40718 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:43:35.264184   40718 main.go:141] libmachine: Using API Version  1
	I0717 00:43:35.264202   40718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:43:35.264672   40718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:43:35.264887   40718 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:43:35.266650   40718 status.go:330] ha-565881 host status = "Running" (err=<nil>)
	I0717 00:43:35.266666   40718 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:43:35.266944   40718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:43:35.266990   40718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:43:35.281991   40718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I0717 00:43:35.282370   40718 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:43:35.282874   40718 main.go:141] libmachine: Using API Version  1
	I0717 00:43:35.282894   40718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:43:35.283199   40718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:43:35.283399   40718 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:43:35.285894   40718 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:43:35.286268   40718 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:43:35.286301   40718 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:43:35.286447   40718 host.go:66] Checking if "ha-565881" exists ...
	I0717 00:43:35.286869   40718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:43:35.286921   40718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:43:35.302179   40718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33447
	I0717 00:43:35.302559   40718 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:43:35.302999   40718 main.go:141] libmachine: Using API Version  1
	I0717 00:43:35.303018   40718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:43:35.303410   40718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:43:35.303628   40718 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:43:35.303825   40718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:43:35.303865   40718 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:43:35.306627   40718 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:43:35.307035   40718 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:43:35.307069   40718 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:43:35.307219   40718 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:43:35.307393   40718 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:43:35.307531   40718 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:43:35.307671   40718 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:43:35.394310   40718 ssh_runner.go:195] Run: systemctl --version
	I0717 00:43:35.404686   40718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:43:35.420185   40718 kubeconfig.go:125] found "ha-565881" server: "https://192.168.39.254:8443"
	I0717 00:43:35.420214   40718 api_server.go:166] Checking apiserver status ...
	I0717 00:43:35.420251   40718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:43:35.436095   40718 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6108/cgroup
	W0717 00:43:35.446756   40718 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:43:35.446806   40718 ssh_runner.go:195] Run: ls
	I0717 00:43:35.451334   40718 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:43:38.512939   40718 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0717 00:43:38.512984   40718 retry.go:31] will retry after 307.420698ms: state is "Stopped"
	I0717 00:43:38.821516   40718 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:43:41.584873   40718 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0717 00:43:41.584913   40718 retry.go:31] will retry after 365.933621ms: state is "Stopped"
	I0717 00:43:41.951510   40718 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:43:44.656862   40718 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0717 00:43:44.656906   40718 retry.go:31] will retry after 345.320053ms: state is "Stopped"
	I0717 00:43:45.002441   40718 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:43:47.729028   40718 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0717 00:43:47.729070   40718 retry.go:31] will retry after 369.348393ms: state is "Stopped"
	I0717 00:43:48.098574   40718 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:43:50.800870   40718 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0717 00:43:50.800928   40718 status.go:422] ha-565881 apiserver status = Running (err=<nil>)
	I0717 00:43:50.800935   40718 status.go:257] ha-565881 status: &{Name:ha-565881 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:43:50.800974   40718 status.go:255] checking status of ha-565881-m02 ...
	I0717 00:43:50.801302   40718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:43:50.801341   40718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:43:50.816138   40718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I0717 00:43:50.816617   40718 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:43:50.817092   40718 main.go:141] libmachine: Using API Version  1
	I0717 00:43:50.817111   40718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:43:50.817394   40718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:43:50.817568   40718 main.go:141] libmachine: (ha-565881-m02) Calling .GetState
	I0717 00:43:50.819060   40718 status.go:330] ha-565881-m02 host status = "Running" (err=<nil>)
	I0717 00:43:50.819077   40718 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:43:50.819355   40718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:43:50.819385   40718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:43:50.833242   40718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44011
	I0717 00:43:50.833600   40718 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:43:50.834035   40718 main.go:141] libmachine: Using API Version  1
	I0717 00:43:50.834057   40718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:43:50.834334   40718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:43:50.834530   40718 main.go:141] libmachine: (ha-565881-m02) Calling .GetIP
	I0717 00:43:50.837266   40718 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:43:50.837664   40718 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:31:58 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:43:50.837686   40718 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:43:50.837898   40718 host.go:66] Checking if "ha-565881-m02" exists ...
	I0717 00:43:50.838196   40718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:43:50.838228   40718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:43:50.852248   40718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
	I0717 00:43:50.852726   40718 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:43:50.853242   40718 main.go:141] libmachine: Using API Version  1
	I0717 00:43:50.853259   40718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:43:50.853593   40718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:43:50.853798   40718 main.go:141] libmachine: (ha-565881-m02) Calling .DriverName
	I0717 00:43:50.854025   40718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:43:50.854061   40718 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHHostname
	I0717 00:43:50.856734   40718 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:43:50.857220   40718 main.go:141] libmachine: (ha-565881-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:b5:c3", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:31:58 +0000 UTC Type:0 Mac:52:54:00:10:b5:c3 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-565881-m02 Clientid:01:52:54:00:10:b5:c3}
	I0717 00:43:50.857244   40718 main.go:141] libmachine: (ha-565881-m02) DBG | domain ha-565881-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:10:b5:c3 in network mk-ha-565881
	I0717 00:43:50.857353   40718 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHPort
	I0717 00:43:50.857514   40718 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHKeyPath
	I0717 00:43:50.857642   40718 main.go:141] libmachine: (ha-565881-m02) Calling .GetSSHUsername
	I0717 00:43:50.857792   40718 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881-m02/id_rsa Username:docker}
	W0717 00:44:09.360814   40718 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.14:22: connect: no route to host
	W0717 00:44:09.360917   40718 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	E0717 00:44:09.360937   40718 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:44:09.360947   40718 status.go:257] ha-565881-m02 status: &{Name:ha-565881-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:44:09.360971   40718 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0717 00:44:09.360983   40718 status.go:255] checking status of ha-565881-m04 ...
	I0717 00:44:09.361302   40718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:09.361372   40718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:09.375656   40718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38411
	I0717 00:44:09.376131   40718 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:09.376603   40718 main.go:141] libmachine: Using API Version  1
	I0717 00:44:09.376625   40718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:09.376928   40718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:09.377113   40718 main.go:141] libmachine: (ha-565881-m04) Calling .GetState
	I0717 00:44:09.378460   40718 status.go:330] ha-565881-m04 host status = "Stopped" (err=<nil>)
	I0717 00:44:09.378470   40718 status.go:343] host is not running, skipping remaining checks
	I0717 00:44:09.378476   40718 status.go:257] ha-565881-m04 status: &{Name:ha-565881-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:546: status says there are running hosts: args "out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr": ha-565881
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

ha-565881-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-565881-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr": ha-565881
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

ha-565881-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-565881-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr": ha-565881
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

ha-565881-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-565881-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565881 -n ha-565881
E0717 00:44:18.740811   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-565881 -n ha-565881: exit status 2 (15.597334686s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565881 logs -n 25: (1.388622915s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-565881 ssh -n ha-565881-m02 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m03_ha-565881-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04:/home/docker/cp-test_ha-565881-m03_ha-565881-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m04 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m03_ha-565881-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp testdata/cp-test.txt                                               | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile507733948/001/cp-test_ha-565881-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881:/home/docker/cp-test_ha-565881-m04_ha-565881.txt                      |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881 sudo cat                                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881.txt                                |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m02:/home/docker/cp-test_ha-565881-m04_ha-565881-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m02 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m03:/home/docker/cp-test_ha-565881-m04_ha-565881-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n                                                                | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | ha-565881-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-565881 ssh -n ha-565881-m03 sudo cat                                         | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	|         | /home/docker/cp-test_ha-565881-m04_ha-565881-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-565881 node stop m02 -v=7                                                    | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-565881 node start m02 -v=7                                                   | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:27 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-565881 -v=7                                                          | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-565881 -v=7                                                               | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-565881 --wait=true -v=7                                                   | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:30 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-565881                                                               | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC |                     |
	| node    | ha-565881 node delete m03 -v=7                                                  | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-565881 stop -v=7                                                             | ha-565881 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:30:03
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:30:03.472958   37091 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:30:03.473178   37091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:30:03.473186   37091 out.go:304] Setting ErrFile to fd 2...
	I0717 00:30:03.473190   37091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:30:03.473344   37091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:30:03.473853   37091 out.go:298] Setting JSON to false
	I0717 00:30:03.474716   37091 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4352,"bootTime":1721171851,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:30:03.474771   37091 start.go:139] virtualization: kvm guest
	I0717 00:30:03.477060   37091 out.go:177] * [ha-565881] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:30:03.478329   37091 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:30:03.478403   37091 notify.go:220] Checking for updates...
	I0717 00:30:03.480995   37091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:30:03.482344   37091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:30:03.483547   37091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:30:03.484814   37091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:30:03.485998   37091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:30:03.487571   37091 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:30:03.487666   37091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:30:03.488110   37091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:30:03.488183   37091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:30:03.502769   37091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46857
	I0717 00:30:03.503194   37091 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:30:03.503743   37091 main.go:141] libmachine: Using API Version  1
	I0717 00:30:03.503765   37091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:30:03.504103   37091 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:30:03.504301   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:30:03.541510   37091 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 00:30:03.542844   37091 start.go:297] selected driver: kvm2
	I0717 00:30:03.542856   37091 start.go:901] validating driver "kvm2" against &{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:30:03.543000   37091 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:30:03.543351   37091 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:30:03.543431   37091 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:30:03.558318   37091 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:30:03.559016   37091 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:30:03.559046   37091 cni.go:84] Creating CNI manager for ""
	I0717 00:30:03.559054   37091 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 00:30:03.559112   37091 start.go:340] cluster config:
	{Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:30:03.559252   37091 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:30:03.561017   37091 out.go:177] * Starting "ha-565881" primary control-plane node in "ha-565881" cluster
	I0717 00:30:03.562183   37091 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:30:03.562210   37091 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:30:03.562219   37091 cache.go:56] Caching tarball of preloaded images
	I0717 00:30:03.562282   37091 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:30:03.562291   37091 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:30:03.562398   37091 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json ...
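
	The cluster spec dumped above is persisted verbatim in that profile config.json. As a minimal sketch for pulling the node list and addon map back out of it on the host (jq availability and the exact field names, assumed to mirror the struct dump above, are assumptions):

	# Hypothetical inspection of the saved profile; path taken from the log line above.
	CFG=/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/config.json
	jq '.Nodes[] | {Name, IP, ControlPlane, Worker}' "$CFG"   # the four ha-565881 nodes
	jq '.KubernetesConfig.KubernetesVersion, .Addons' "$CFG"  # v1.30.2 plus the addon map
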
	I0717 00:30:03.562605   37091 start.go:360] acquireMachinesLock for ha-565881: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:30:03.562643   37091 start.go:364] duration metric: took 22.287µs to acquireMachinesLock for "ha-565881"
	I0717 00:30:03.562657   37091 start.go:96] Skipping create...Using existing machine configuration
	I0717 00:30:03.562665   37091 fix.go:54] fixHost starting: 
	I0717 00:30:03.562913   37091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:30:03.562942   37091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:30:03.577346   37091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
	I0717 00:30:03.577771   37091 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:30:03.578283   37091 main.go:141] libmachine: Using API Version  1
	I0717 00:30:03.578307   37091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:30:03.578612   37091 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:30:03.578778   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:30:03.578956   37091 main.go:141] libmachine: (ha-565881) Calling .GetState
	I0717 00:30:03.580457   37091 fix.go:112] recreateIfNeeded on ha-565881: state=Running err=<nil>
	W0717 00:30:03.580473   37091 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 00:30:03.583293   37091 out.go:177] * Updating the running kvm2 "ha-565881" VM ...
	I0717 00:30:03.584488   37091 machine.go:94] provisionDockerMachine start ...
	I0717 00:30:03.584508   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:30:03.584718   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.586840   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.587288   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.587320   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.587446   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:03.587598   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.587745   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.587877   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:03.588058   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:03.588246   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:03.588256   37091 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:30:03.705684   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881
	
	I0717 00:30:03.705712   37091 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:30:03.705945   37091 buildroot.go:166] provisioning hostname "ha-565881"
	I0717 00:30:03.705986   37091 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:30:03.706223   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.708858   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.709223   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.709249   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.709419   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:03.709680   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.709842   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.709989   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:03.710164   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:03.710330   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:03.710374   37091 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565881 && echo "ha-565881" | sudo tee /etc/hostname
	I0717 00:30:03.843470   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565881
	
	I0717 00:30:03.843498   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.846412   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.846780   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.846804   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.847036   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:03.847216   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.847358   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:03.847507   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:03.847645   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:03.847802   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:03.847816   37091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565881' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565881/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565881' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:30:03.965266   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:30:03.965298   37091 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:30:03.965331   37091 buildroot.go:174] setting up certificates
	I0717 00:30:03.965342   37091 provision.go:84] configureAuth start
	I0717 00:30:03.965358   37091 main.go:141] libmachine: (ha-565881) Calling .GetMachineName
	I0717 00:30:03.965599   37091 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:30:03.968261   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.968685   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.968720   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.968867   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:03.971217   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.971529   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:03.971549   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:03.971639   37091 provision.go:143] copyHostCerts
	I0717 00:30:03.971663   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:30:03.971726   37091 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 00:30:03.971745   37091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:30:03.971812   37091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:30:03.971911   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:30:03.971939   37091 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 00:30:03.971948   37091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:30:03.972001   37091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:30:03.972058   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:30:03.972075   37091 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 00:30:03.972081   37091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:30:03.972106   37091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:30:03.972159   37091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.ha-565881 san=[127.0.0.1 192.168.39.238 ha-565881 localhost minikube]
	I0717 00:30:04.115427   37091 provision.go:177] copyRemoteCerts
	I0717 00:30:04.115482   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:30:04.115503   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:04.118744   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.119317   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:04.119347   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.119555   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:04.119745   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:04.119928   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:04.120090   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:30:04.208734   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:30:04.208802   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 00:30:04.237408   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:30:04.237489   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:30:04.264010   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:30:04.264070   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:30:04.287879   37091 provision.go:87] duration metric: took 322.51954ms to configureAuth
	I0717 00:30:04.287910   37091 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:30:04.288184   37091 config.go:182] Loaded profile config "ha-565881": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:30:04.288255   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:30:04.290649   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.291089   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:30:04.291116   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:30:04.291289   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:30:04.291470   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:04.291640   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:30:04.291741   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:30:04.291873   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:30:04.292044   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:30:04.292058   37091 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:31:35.247731   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:31:35.247757   37091 machine.go:97] duration metric: took 1m31.66325606s to provisionDockerMachine
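
	That single SSH step (writing /etc/sysconfig/crio.minikube and restarting crio) spans 00:30:04 to 00:31:35 and accounts for most of the 1m31s reported above. A hedged sketch for confirming the result from the host, assuming the ha-565881 profile is still running:

	# Show the insecure-registry option that was written, then confirm crio came back up.
	minikube -p ha-565881 ssh -- cat /etc/sysconfig/crio.minikube
	minikube -p ha-565881 ssh -- sudo systemctl is-active crio
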
	I0717 00:31:35.247768   37091 start.go:293] postStartSetup for "ha-565881" (driver="kvm2")
	I0717 00:31:35.247799   37091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:31:35.247824   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.248178   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:31:35.248207   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.251173   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.251605   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.251648   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.251775   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.251956   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.252113   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.252239   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:31:35.341073   37091 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:31:35.345318   37091 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:31:35.345349   37091 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 00:31:35.345409   37091 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 00:31:35.345487   37091 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 00:31:35.345496   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /etc/ssl/certs/200682.pem
	I0717 00:31:35.345577   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:31:35.355014   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:31:35.378321   37091 start.go:296] duration metric: took 130.540009ms for postStartSetup
	I0717 00:31:35.378364   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.378645   37091 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 00:31:35.378668   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.381407   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.381759   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.381777   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.381950   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.382135   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.382269   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.382390   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	W0717 00:31:35.467602   37091 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0717 00:31:35.467627   37091 fix.go:56] duration metric: took 1m31.904962355s for fixHost
	I0717 00:31:35.467654   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.470742   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.471061   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.471092   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.471293   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.471500   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.471682   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.471811   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.471998   37091 main.go:141] libmachine: Using SSH client type: native
	I0717 00:31:35.472184   37091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0717 00:31:35.472199   37091 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 00:31:35.585646   37091 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721176295.539868987
	
	I0717 00:31:35.585669   37091 fix.go:216] guest clock: 1721176295.539868987
	I0717 00:31:35.585675   37091 fix.go:229] Guest: 2024-07-17 00:31:35.539868987 +0000 UTC Remote: 2024-07-17 00:31:35.467636929 +0000 UTC m=+92.028103333 (delta=72.232058ms)
	I0717 00:31:35.585712   37091 fix.go:200] guest clock delta is within tolerance: 72.232058ms
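
	The guest-clock probe above is simply date +%s.%N run over SSH and compared against the host clock. A minimal sketch for repeating the same delta check by hand (variable names are illustrative):

	# Compare host and guest wall clocks the same way fix.go does.
	host_ts=$(date +%s.%N)
	guest_ts=$(minikube -p ha-565881 ssh -- date +%s.%N)
	echo "host=${host_ts} guest=${guest_ts}"   # the delta here was ~72ms, within tolerance
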
	I0717 00:31:35.585718   37091 start.go:83] releasing machines lock for "ha-565881", held for 1m32.023065415s
	I0717 00:31:35.585737   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.585998   37091 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:31:35.588681   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.589073   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.589105   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.589223   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.589658   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.589816   37091 main.go:141] libmachine: (ha-565881) Calling .DriverName
	I0717 00:31:35.589949   37091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:31:35.590001   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.590075   37091 ssh_runner.go:195] Run: cat /version.json
	I0717 00:31:35.590101   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHHostname
	I0717 00:31:35.592529   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.592811   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.592884   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.592925   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.593058   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.593206   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:35.593215   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.593229   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:35.593401   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHPort
	I0717 00:31:35.593410   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.593555   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHKeyPath
	I0717 00:31:35.593554   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:31:35.593674   37091 main.go:141] libmachine: (ha-565881) Calling .GetSSHUsername
	I0717 00:31:35.593812   37091 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/ha-565881/id_rsa Username:docker}
	I0717 00:31:35.674134   37091 ssh_runner.go:195] Run: systemctl --version
	I0717 00:31:35.702524   37091 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:31:35.860996   37091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:31:35.869782   37091 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:31:35.869845   37091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:31:35.878978   37091 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 00:31:35.879007   37091 start.go:495] detecting cgroup driver to use...
	I0717 00:31:35.879098   37091 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:31:35.895504   37091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:31:35.909937   37091 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:31:35.909986   37091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:31:35.923661   37091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:31:35.937352   37091 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:31:36.114537   37091 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:31:36.337616   37091 docker.go:233] disabling docker service ...
	I0717 00:31:36.337696   37091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:31:36.368404   37091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:31:36.382665   37091 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:31:36.542136   37091 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:31:36.694879   37091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:31:36.710588   37091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:31:36.730775   37091 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:31:36.730835   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.742887   37091 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:31:36.742962   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.753720   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.764188   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.774456   37091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:31:36.785055   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.795722   37091 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.806771   37091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:31:36.817066   37091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:31:36.826812   37091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:31:36.836656   37091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:31:36.977073   37091 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:31:46.703564   37091 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.72645615s)
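
	For reference, the scattered ssh_runner edits above boil down to a few in-place changes to /etc/crio/crio.conf.d/02-crio.conf followed by a restart. A condensed sketch of the same sequence, mirroring the commands already in the log rather than adding new ones:

	# Pause image and cgroup driver, as applied above.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# Enable IP forwarding, then reload and restart the runtime.
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio   # the restart took ~9.7s here
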
	I0717 00:31:46.703601   37091 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:31:46.703656   37091 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:31:46.708592   37091 start.go:563] Will wait 60s for crictl version
	I0717 00:31:46.708643   37091 ssh_runner.go:195] Run: which crictl
	I0717 00:31:46.712405   37091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:31:46.748919   37091 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:31:46.748989   37091 ssh_runner.go:195] Run: crio --version
	I0717 00:31:46.776791   37091 ssh_runner.go:195] Run: crio --version
	I0717 00:31:46.805919   37091 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:31:46.807247   37091 main.go:141] libmachine: (ha-565881) Calling .GetIP
	I0717 00:31:46.809680   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:46.810066   37091 main.go:141] libmachine: (ha-565881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f7:b6", ip: ""} in network mk-ha-565881: {Iface:virbr1 ExpiryTime:2024-07-17 01:20:12 +0000 UTC Type:0 Mac:52:54:00:ff:f7:b6 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-565881 Clientid:01:52:54:00:ff:f7:b6}
	I0717 00:31:46.810105   37091 main.go:141] libmachine: (ha-565881) DBG | domain ha-565881 has defined IP address 192.168.39.238 and MAC address 52:54:00:ff:f7:b6 in network mk-ha-565881
	I0717 00:31:46.810335   37091 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:31:46.814801   37091 kubeadm.go:883] updating cluster {Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:31:46.814920   37091 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:31:46.814962   37091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:31:46.864570   37091 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:31:46.864592   37091 crio.go:433] Images already preloaded, skipping extraction
	I0717 00:31:46.864662   37091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:31:46.898334   37091 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:31:46.898361   37091 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:31:46.898374   37091 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.30.2 crio true true} ...
	I0717 00:31:46.898496   37091 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565881 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
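
	The kubelet unit and flags above are written to the node as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears further down in this log). A small sketch for inspecting what actually landed on the machine, assuming the node is still up:

	# View the generated drop-in plus the effective unit; both paths appear in this log.
	minikube -p ha-565881 ssh -- cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	minikube -p ha-565881 ssh -- systemctl cat kubelet
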
	I0717 00:31:46.898622   37091 ssh_runner.go:195] Run: crio config
	I0717 00:31:46.950419   37091 cni.go:84] Creating CNI manager for ""
	I0717 00:31:46.950449   37091 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 00:31:46.950466   37091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:31:46.950490   37091 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565881 NodeName:ha-565881 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:31:46.950650   37091 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565881"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
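
	The rendered kubeadm config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). A hedged sketch for inspecting and sanity-checking it; the kubeadm config validate subcommand is assumed to be available in the bundled v1.30.2 binary:

	# Dump the staged config, then (assumption) validate it with the bundled kubeadm.
	minikube -p ha-565881 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	minikube -p ha-565881 ssh -- sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
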
	
	I0717 00:31:46.950675   37091 kube-vip.go:115] generating kube-vip config ...
	I0717 00:31:46.950731   37091 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:31:46.962599   37091 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:31:46.962724   37091 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
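
	Since the manifest above binds the VIP 192.168.39.254 to eth0 and runs kube-vip as a static pod, a quick sketch for confirming both from outside the VM (the kubectl context name is assumed to follow the profile name):

	# The VIP should appear as an extra address on eth0 of the control-plane node.
	minikube -p ha-565881 ssh -- ip addr show eth0 | grep 192.168.39.254
	# The static pod shows up in kube-system once kubelet picks up the manifest.
	kubectl --context ha-565881 -n kube-system get pods -o wide | grep kube-vip
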
	I0717 00:31:46.962776   37091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:31:46.972441   37091 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:31:46.972515   37091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 00:31:46.981722   37091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 00:31:46.998862   37091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:31:47.016994   37091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 00:31:47.040256   37091 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:31:47.056667   37091 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:31:47.061956   37091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:31:47.205261   37091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:31:47.220035   37091 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881 for IP: 192.168.39.238
	I0717 00:31:47.220059   37091 certs.go:194] generating shared ca certs ...
	I0717 00:31:47.220074   37091 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:31:47.220232   37091 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 00:31:47.220289   37091 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 00:31:47.220306   37091 certs.go:256] generating profile certs ...
	I0717 00:31:47.220405   37091 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/client.key
	I0717 00:31:47.220439   37091 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d
	I0717 00:31:47.220463   37091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.14 192.168.39.97 192.168.39.254]
	I0717 00:31:47.358180   37091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d ...
	I0717 00:31:47.358210   37091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d: {Name:mkbe0bb2172102aa8c7ea4b23ce0c7fe570174cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:31:47.358402   37091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d ...
	I0717 00:31:47.358423   37091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d: {Name:mkbcb38a702d9304a89a7717b83e8333c6851c66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:31:47.358518   37091 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt.dcff810d -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt
	I0717 00:31:47.358723   37091 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key.dcff810d -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key
	I0717 00:31:47.358880   37091 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key
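The profile apiserver certificate generated above is signed for the cluster service IP, loopback, the three control-plane node IPs and the VIP. A minimal sketch of confirming those SANs with openssl, using the certificate path from the log:

    # Sketch only; the path is the apiserver.crt written above.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt \
      | grep -A1 'Subject Alternative Name'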
	I0717 00:31:47.358905   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:31:47.358923   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:31:47.358947   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:31:47.358964   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:31:47.358980   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:31:47.358996   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:31:47.359014   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:31:47.359031   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:31:47.359093   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 00:31:47.359132   37091 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 00:31:47.359146   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:31:47.359174   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:31:47.359203   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:31:47.359237   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 00:31:47.359289   37091 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 00:31:47.359329   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.359349   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.359367   37091 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem -> /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.359929   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:31:47.386164   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:31:47.410527   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:31:47.434465   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:31:47.456999   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 00:31:47.480811   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 00:31:47.503411   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:31:47.526710   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/ha-565881/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:31:47.549885   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 00:31:47.573543   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:31:47.598119   37091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 00:31:47.621760   37091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:31:47.638631   37091 ssh_runner.go:195] Run: openssl version
	I0717 00:31:47.645238   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 00:31:47.655857   37091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.660235   37091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.660292   37091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 00:31:47.665757   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:31:47.674979   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:31:47.685757   37091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.689981   37091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.690028   37091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:31:47.695412   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:31:47.704384   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 00:31:47.714711   37091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.718924   37091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.718961   37091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 00:31:47.724398   37091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
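The block above repeats one pattern per certificate: check that the PEM exists under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0 so the system trust store picks it up. A minimal sketch of that pattern for a single certificate:

    # Sketch only: same hash-and-link scheme as the commands above.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"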
	I0717 00:31:47.733669   37091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:31:47.737932   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 00:31:47.743392   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 00:31:47.748664   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 00:31:47.753938   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 00:31:47.759225   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 00:31:47.764447   37091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
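Each -checkend 86400 call above asks openssl whether the certificate will still be valid in 86400 seconds (24 hours); a non-zero exit means it expires within that window and would need regeneration. A minimal sketch of the same check with an explicit result:

    # Sketch only: -checkend exits 0 if the cert outlives the given number of seconds.
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for at least 24h" || echo "expires within 24h"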
	I0717 00:31:47.769709   37091 kubeadm.go:392] StartCluster: {Name:ha-565881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-565881 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.79 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:31:47.769816   37091 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:31:47.769867   37091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:31:47.806048   37091 cri.go:89] found id: "05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0"
	I0717 00:31:47.806070   37091 cri.go:89] found id: "42119e9324f11f4297cf4f2052d5440773e17236489ca34e1988564acce85cc1"
	I0717 00:31:47.806075   37091 cri.go:89] found id: "8b3db903a1f836c172e85c6e6229a0500c4729281c2733ba22e09d38ec08964b"
	I0717 00:31:47.806079   37091 cri.go:89] found id: "404747229eea4d41bdc771562fc8b910464a0694c31f9ae117eeaec79057382d"
	I0717 00:31:47.806083   37091 cri.go:89] found id: "dcda7fe2ea87d9d0412fd424de512c60b84b972996e99cbd410f5a517bb7bf6a"
	I0717 00:31:47.806087   37091 cri.go:89] found id: "928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519"
	I0717 00:31:47.806091   37091 cri.go:89] found id: "cda0c9ceea230512b2466e8e897193ba91f605ffdd18f97cc513b9383712a10c"
	I0717 00:31:47.806095   37091 cri.go:89] found id: "52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146"
	I0717 00:31:47.806099   37091 cri.go:89] found id: "e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f"
	I0717 00:31:47.806106   37091 cri.go:89] found id: "14c44e183ef1f377bf131b0f0b7f0976adbdf72efd90beb01dfa5c8be36324e5"
	I0717 00:31:47.806111   37091 cri.go:89] found id: "1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6"
	I0717 00:31:47.806115   37091 cri.go:89] found id: "ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36"
	I0717 00:31:47.806120   37091 cri.go:89] found id: "2735221f6ad7f4c25f36739d364bdfe3a27763972e0587f57857ee5012dab84c"
	I0717 00:31:47.806127   37091 cri.go:89] found id: "c44889c22020bc2b13dc8cd59e7c6ae2486362e4178446de7a70718a9acf56ff"
	I0717 00:31:47.806132   37091 cri.go:89] found id: ""
	I0717 00:31:47.806177   37091 ssh_runner.go:195] Run: sudo runc list -f json
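The container IDs listed above come from crictl filtered on the kube-system namespace label. A minimal sketch of reproducing that listing on the node and inspecting one of the reported containers; crictl inspect is standard crictl, not a command taken from this log:

    # Sketch only: same filter as the log, then inspect one ID from the list above.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo crictl inspect 05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0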
	
	
	==> CRI-O <==
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.275639967Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177065275618018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e6ce433-29d4-446a-b248-8a536a7c47f1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.276270122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d73f3b85-9be1-4dcb-82a5-7e7085036d50 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.276330155Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d73f3b85-9be1-4dcb-82a5-7e7085036d50 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.276794119Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e103c583281da20d2712a934b0cdf7016a38e002a4aad8e5b2f1fe11db5529e,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176998277357512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8442ce063b3c1b00e28ef814055b056881b11710941ec2dbfef83bf7088d574e,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176974565982574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2962046a34c3f92ffa13be485cc4f0a10a87c0b94eb36b9a6b3724997c2f8cb8,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721176907033458508,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16cb08b90a1798a1b0decaa10b138dc553746026bcbcbfceef2f14de0a2d0b67,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176365582149050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176353576784050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d1de2fa4b27327c0ea0d50f22abea07b3bbeedbeabee25fa6b6925c51cae3c,PodSandboxId:6291ee1cd24eed32e2768981e5933e237015a0217240ae4a2f6f250cda33d6fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721176345821835098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c56dc46091fa9f84d51b7daba191ddb12ee8cbac176d8434cd0a3da5e1a6d53a,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1721176326551856835,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131,PodSandboxId:455c3609259116bfb5b20b686f8d2a5d595494f71bd762dbb905c3f00e884b64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176317541402080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96,PodSandboxId:bc60e96519276152aef10c68f24dedda86aa0afe25a4954e53f8ce951fc0e31f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721176312845034877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e,PodSandboxId:9ba19e8f07eabd1cf7ab258280887d8b7be1fb40897a12464b3fb5972aae684a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721176312685003457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371,PodSandboxId:a5da1d69074397b3b15599402878e7ba3eb9bb2f645757cffee61dc6d331ddfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176312654345932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5,PodSandboxId:b18ab0c603ba0b0cb73f9af63e61df1e460b2e9e31d15d4b454150782a4dd7d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176312539976574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832,PodSandboxId:2f58179b1c60fec5e3492abb2bdf627d4b4f10645f32058fb7cd53cc8772972b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176312496085530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176312440024140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0,PodSandboxId:3acd7d3f5c21f5b11cce8554e291d9295ad5bb823f2fcfe3cc1e870c954ba3b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176296198303734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721175803248543045,Labels:map[string]string{io.kubernetes.con
tainer.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721175667830910521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.po
d.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721175655675801389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53
c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721175653514932581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f2
3ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721175633405426109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,
State:CONTAINER_EXITED,CreatedAt:1721175633392545693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d73f3b85-9be1-4dcb-82a5-7e7085036d50 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.322469686Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b3c449a-f641-45d2-8df3-a99c5b5c30f2 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.322545863Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b3c449a-f641-45d2-8df3-a99c5b5c30f2 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.323661814Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6516b99a-81f3-4e64-a5e1-04ef7d77619e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.324326424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177065324294839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6516b99a-81f3-4e64-a5e1-04ef7d77619e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.325017942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7af4c066-c12d-454f-9f51-e232f2d163a6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.325096647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7af4c066-c12d-454f-9f51-e232f2d163a6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.325547108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e103c583281da20d2712a934b0cdf7016a38e002a4aad8e5b2f1fe11db5529e,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176998277357512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8442ce063b3c1b00e28ef814055b056881b11710941ec2dbfef83bf7088d574e,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176974565982574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2962046a34c3f92ffa13be485cc4f0a10a87c0b94eb36b9a6b3724997c2f8cb8,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721176907033458508,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16cb08b90a1798a1b0decaa10b138dc553746026bcbcbfceef2f14de0a2d0b67,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176365582149050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176353576784050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d1de2fa4b27327c0ea0d50f22abea07b3bbeedbeabee25fa6b6925c51cae3c,PodSandboxId:6291ee1cd24eed32e2768981e5933e237015a0217240ae4a2f6f250cda33d6fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721176345821835098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c56dc46091fa9f84d51b7daba191ddb12ee8cbac176d8434cd0a3da5e1a6d53a,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1721176326551856835,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131,PodSandboxId:455c3609259116bfb5b20b686f8d2a5d595494f71bd762dbb905c3f00e884b64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176317541402080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96,PodSandboxId:bc60e96519276152aef10c68f24dedda86aa0afe25a4954e53f8ce951fc0e31f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721176312845034877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e,PodSandboxId:9ba19e8f07eabd1cf7ab258280887d8b7be1fb40897a12464b3fb5972aae684a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721176312685003457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371,PodSandboxId:a5da1d69074397b3b15599402878e7ba3eb9bb2f645757cffee61dc6d331ddfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176312654345932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5,PodSandboxId:b18ab0c603ba0b0cb73f9af63e61df1e460b2e9e31d15d4b454150782a4dd7d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176312539976574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832,PodSandboxId:2f58179b1c60fec5e3492abb2bdf627d4b4f10645f32058fb7cd53cc8772972b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176312496085530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176312440024140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0,PodSandboxId:3acd7d3f5c21f5b11cce8554e291d9295ad5bb823f2fcfe3cc1e870c954ba3b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176296198303734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721175803248543045,Labels:map[string]string{io.kubernetes.con
tainer.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721175667830910521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.po
d.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721175655675801389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53
c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721175653514932581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f2
3ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721175633405426109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,
State:CONTAINER_EXITED,CreatedAt:1721175633392545693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7af4c066-c12d-454f-9f51-e232f2d163a6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.367044877Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9f65306-70f4-4743-9384-c782695ee143 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.367154030Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9f65306-70f4-4743-9384-c782695ee143 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.368292276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cae924fc-0cc2-4628-bb6b-113271636d0d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.368858679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177065368831343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cae924fc-0cc2-4628-bb6b-113271636d0d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.369335511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f12196d6-9ec4-4546-9b15-d783e45d4bad name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.369419302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f12196d6-9ec4-4546-9b15-d783e45d4bad name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.369868983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e103c583281da20d2712a934b0cdf7016a38e002a4aad8e5b2f1fe11db5529e,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176998277357512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8442ce063b3c1b00e28ef814055b056881b11710941ec2dbfef83bf7088d574e,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176974565982574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2962046a34c3f92ffa13be485cc4f0a10a87c0b94eb36b9a6b3724997c2f8cb8,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721176907033458508,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16cb08b90a1798a1b0decaa10b138dc553746026bcbcbfceef2f14de0a2d0b67,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176365582149050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176353576784050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d1de2fa4b27327c0ea0d50f22abea07b3bbeedbeabee25fa6b6925c51cae3c,PodSandboxId:6291ee1cd24eed32e2768981e5933e237015a0217240ae4a2f6f250cda33d6fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721176345821835098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c56dc46091fa9f84d51b7daba191ddb12ee8cbac176d8434cd0a3da5e1a6d53a,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1721176326551856835,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131,PodSandboxId:455c3609259116bfb5b20b686f8d2a5d595494f71bd762dbb905c3f00e884b64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176317541402080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96,PodSandboxId:bc60e96519276152aef10c68f24dedda86aa0afe25a4954e53f8ce951fc0e31f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721176312845034877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e,PodSandboxId:9ba19e8f07eabd1cf7ab258280887d8b7be1fb40897a12464b3fb5972aae684a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721176312685003457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371,PodSandboxId:a5da1d69074397b3b15599402878e7ba3eb9bb2f645757cffee61dc6d331ddfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176312654345932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5,PodSandboxId:b18ab0c603ba0b0cb73f9af63e61df1e460b2e9e31d15d4b454150782a4dd7d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176312539976574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832,PodSandboxId:2f58179b1c60fec5e3492abb2bdf627d4b4f10645f32058fb7cd53cc8772972b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176312496085530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176312440024140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0,PodSandboxId:3acd7d3f5c21f5b11cce8554e291d9295ad5bb823f2fcfe3cc1e870c954ba3b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176296198303734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721175803248543045,Labels:map[string]string{io.kubernetes.con
tainer.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721175667830910521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.po
d.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721175655675801389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53
c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721175653514932581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f2
3ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721175633405426109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,
State:CONTAINER_EXITED,CreatedAt:1721175633392545693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f12196d6-9ec4-4546-9b15-d783e45d4bad name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.413236742Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a326a33-2457-4dd9-aff0-83b1ee3e1df5 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.413335090Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a326a33-2457-4dd9-aff0-83b1ee3e1df5 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.414657370Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a468be0e-02b3-47b5-b4ed-a7e26bf85253 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.415175806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177065415152207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a468be0e-02b3-47b5-b4ed-a7e26bf85253 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.415928487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=333d1dfa-4505-433f-866c-35d73b09ca1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.416007684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=333d1dfa-4505-433f-866c-35d73b09ca1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:44:25 ha-565881 crio[3887]: time="2024-07-17 00:44:25.416423424Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e103c583281da20d2712a934b0cdf7016a38e002a4aad8e5b2f1fe11db5529e,PodSandboxId:03f0287dade777d5b9b0535bd46ddad42429027a84827912f3609bf5c57656ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176998277357512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 137a148a990fa52e8281e355098ea021,},Annotations:map[string]string{io.kubernetes.container.hash: f86ebdae,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8442ce063b3c1b00e28ef814055b056881b11710941ec2dbfef83bf7088d574e,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176974565982574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2962046a34c3f92ffa13be485cc4f0a10a87c0b94eb36b9a6b3724997c2f8cb8,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721176907033458508,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16cb08b90a1798a1b0decaa10b138dc553746026bcbcbfceef2f14de0a2d0b67,PodSandboxId:002ff42b3204bc5d220770db0c3c6a92940972909f62d44bbaca7585ff571dd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176365582149050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa1050a-43e1-4f7a-a2df-80cafb48e673,},Annotations:map[string]string{io.kubernetes.container.hash: 51319657,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176353576784050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d1de2fa4b27327c0ea0d50f22abea07b3bbeedbeabee25fa6b6925c51cae3c,PodSandboxId:6291ee1cd24eed32e2768981e5933e237015a0217240ae4a2f6f250cda33d6fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721176345821835098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c56dc46091fa9f84d51b7daba191ddb12ee8cbac176d8434cd0a3da5e1a6d53a,PodSandboxId:249e7577f537498da317ce4a00395301c5eafb441b0f821f061ce7da0e3bde20,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1721176326551856835,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a56a7652e75cdb2280ae1925adea5b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131,PodSandboxId:455c3609259116bfb5b20b686f8d2a5d595494f71bd762dbb905c3f00e884b64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176317541402080,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.hash: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96,PodSandboxId:bc60e96519276152aef10c68f24dedda86aa0afe25a4954e53f8ce951fc0e31f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721176312845034877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 1,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e,PodSandboxId:9ba19e8f07eabd1cf7ab258280887d8b7be1fb40897a12464b3fb5972aae684a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721176312685003457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371,PodSandboxId:a5da1d69074397b3b15599402878e7ba3eb9bb2f645757cffee61dc6d331ddfc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176312654345932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5,PodSandboxId:b18ab0c603ba0b0cb73f9af63e61df1e460b2e9e31d15d4b454150782a4dd7d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176312539976574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832,PodSandboxId:2f58179b1c60fec5e3492abb2bdf627d4b4f10645f32058fb7cd53cc8772972b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176312496085530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b,PodSandboxId:be9e8898804ae5f0712818b035e6081538e4923a0e8e40ec926ee9f4405a8803,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176312440024140,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960ed960c6610568e154d20884b393df,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0,PodSandboxId:3acd7d3f5c21f5b11cce8554e291d9295ad5bb823f2fcfe3cc1e870c954ba3b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176296198303734,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xftzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fe6b06-0568-4da7-bd0c-1883bc99995c,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 1489f0c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b495a0555240a07bd8bacb77c1802d30d4955b8e70aac119d8b370dda0b9fc,PodSandboxId:e0bd927bf2760ab675894d134072e9a08267392017a0fac360a5c1192db5f6da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721175803248543045,Labels:map[string]string{io.kubernetes.con
tainer.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-sxdsp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a532a93-0ab1-4911-b7f5-9d85eda2be75,},Annotations:map[string]string{io.kubernetes.container.hash: efe98420,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519,PodSandboxId:f688446a5f59c1b1408ac1bc970cf5eb44767fc889ce3f4f29fba6e848d4efc3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721175667830910521,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.po
d.name: coredns-7db6d8ff4d-7wsqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a433e03-decb-405d-82f1-b14a72412c8a,},Annotations:map[string]string{io.kubernetes.container.hash: d056bd63,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146,PodSandboxId:5c5494014c8b1e4657c3fd4ad4b13feba46b6dac06c04917f04a647c1045f3a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721175655675801389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5lrdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd3c879a-726b-40ed-ba4f-897bf43cda26,},Annotations:map[string]string{io.kubernetes.container.hash: af89605,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f,PodSandboxId:12f43031f4b04fbdb3674dd83edbe24f7962d122db4c906e28034fce063ac4d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53
c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721175653514932581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f5aff6-5e99-4cfe-af04-94198e8d9616,},Annotations:map[string]string{io.kubernetes.container.hash: 2d197e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6,PodSandboxId:a6e214878133350dfa81fdac615fe920b4e1b860e7671bd5d2a6f36699a66c7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f2
3ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721175633405426109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b826e45ce780868932f8d9a5a17c6b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36,PodSandboxId:afbb712100717f9b6f68fe42e21c0ad8b0e7b8d2bd9bfe2261c22384399c8d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,
State:CONTAINER_EXITED,CreatedAt:1721175633392545693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565881,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f82fe075280b90a17d8f04a23fc7629,},Annotations:map[string]string{io.kubernetes.container.hash: 302d3b8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=333d1dfa-4505-433f-866c-35d73b09ca1b name=/runtime.v1.RuntimeService/ListContainers
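The repeated Version/ImageFsInfo/ListContainers request–response pairs above are level=debug entries from routine CRI polling of the crio socket, not failures in themselves. For reference, the sketch below issues the same ListContainers RPC that produced these responses; it is a minimal illustration only, and the unix socket path and the k8s.io/cri-api client usage are assumptions for a CRI-O node such as this minikube VM, not something taken from the report.

// listcontainers.go: minimal sketch of the CRI ListContainers call logged above.
// Assumes CRI-O's default socket at /var/run/crio/crio.sock and the v1 CRI API.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O runtime socket over a local unix connection (no TLS).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial crio socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter corresponds to the "No filters were applied, returning
	// full container list" debug message in the crio log.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}

	// Print the same fields the report's "container status" table summarizes:
	// container ID, name, state, and restart attempt.
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\tattempt=%d\n",
			c.Id, c.GetMetadata().GetName(), c.GetState(), c.GetMetadata().GetAttempt())
	}
}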
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3e103c583281d       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Exited              kube-apiserver            4                   03f0287dade77       kube-apiserver-ha-565881
	8442ce063b3c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       5                   002ff42b3204b       storage-provisioner
	2962046a34c3f       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  1                   249e7577f5374       kube-vip-ha-565881
	16cb08b90a179       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago       Exited              storage-provisioner       4                   002ff42b3204b       storage-provisioner
	1293602792aa7       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      11 minutes ago       Running             kube-controller-manager   2                   be9e8898804ae       kube-controller-manager-ha-565881
	d5d1de2fa4b27       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      11 minutes ago       Running             busybox                   1                   6291ee1cd24ee       busybox-fc5497c4f-sxdsp
	c56dc46091fa9       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      12 minutes ago       Exited              kube-vip                  0                   249e7577f5374       kube-vip-ha-565881
	6e4b939607467       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Running             coredns                   2                   455c360925911       coredns-7db6d8ff4d-xftzx
	02a737bcd9b5f       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      12 minutes ago       Running             kube-proxy                1                   bc60e96519276       kube-proxy-7p2jl
	410067f28bfdb       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      12 minutes ago       Running             kindnet-cni               1                   9ba19e8f07eab       kindnet-5lrdt
	7a7dd9858b20e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Running             coredns                   1                   a5da1d6907439       coredns-7db6d8ff4d-7wsqq
	fb316c8a568ce       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      12 minutes ago       Running             etcd                      1                   b18ab0c603ba0       etcd-ha-565881
	85245e283143e       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      12 minutes ago       Running             kube-scheduler            1                   2f58179b1c60f       kube-scheduler-ha-565881
	583c9df0a3d19       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      12 minutes ago       Exited              kube-controller-manager   1                   be9e8898804ae       kube-controller-manager-ha-565881
	05847440b65b8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   1                   3acd7d3f5c21f       coredns-7db6d8ff4d-xftzx
	28b495a055524       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   21 minutes ago       Exited              busybox                   0                   e0bd927bf2760       busybox-fc5497c4f-sxdsp
	928ee85bf546b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      23 minutes ago       Exited              coredns                   0                   f688446a5f59c       coredns-7db6d8ff4d-7wsqq
	52b45808cde82       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    23 minutes ago       Exited              kindnet-cni               0                   5c5494014c8b1       kindnet-5lrdt
	e572bb9aec2e8       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      23 minutes ago       Exited              kube-proxy                0                   12f43031f4b04       kube-proxy-7p2jl
	1ec015ce8f841       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      23 minutes ago       Exited              kube-scheduler            0                   a6e2148781333       kube-scheduler-ha-565881
	ab8577693652f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      23 minutes ago       Exited              etcd                      0                   afbb712100717       etcd-ha-565881
	
	
	==> coredns [05847440b65b8539938bce85e8f59715c7d3ebe9aae505c99957da2560b380c0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45091 - 7026 "HINFO IN 1445449914924310106.5846422275679746414. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012221557s
	
	
	==> coredns [6e4b9396074674ba1c789e7f9eaf3ea89f7321f960a3d1827b143a1f7efc7131] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3409": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3377": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3377": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3409": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3409": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3377": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3377": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3454": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3454": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3409": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3409": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3377": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3377": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [7a7dd9858b20eaab5ce6cbc7b21c8900b2cf2d3d2cacadaea817177b9799f371] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[703209174]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:43:44.175) (total time: 11616ms):
	Trace[703209174]: ---"Objects listed" error:Unauthorized 11616ms (00:43:55.791)
	Trace[703209174]: [11.616416687s] [11.616416687s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[765271377]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:43:43.874) (total time: 11917ms):
	Trace[765271377]: ---"Objects listed" error:Unauthorized 11917ms (00:43:55.792)
	Trace[765271377]: [11.917638086s] [11.917638086s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1742924732]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:43:59.537) (total time: 10263ms):
	Trace[1742924732]: ---"Objects listed" error:Unauthorized 10263ms (00:44:09.800)
	Trace[1742924732]: [10.2633524s] [10.2633524s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3397": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3397": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3407": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3407": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3455": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3455": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [928ee85bf546b1edddbc32b104ed846b43af526f4425dd84e9f6c024fa0cd519] <==
	[INFO] 10.244.0.4:59609 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005296s
	[INFO] 10.244.0.4:41601 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174604s
	[INFO] 10.244.2.2:54282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144175s
	[INFO] 10.244.2.2:33964 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000291713s
	[INFO] 10.244.2.2:38781 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098409s
	[INFO] 10.244.1.2:58603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132708s
	[INFO] 10.244.2.2:42857 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129277s
	[INFO] 10.244.2.2:45518 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176537s
	[INFO] 10.244.1.2:38437 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111768s
	[INFO] 10.244.1.2:41860 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000210674s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1916&timeout=7m36s&timeoutSeconds=456&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1941&timeout=7m53s&timeoutSeconds=473&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1217777566]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:29:52.269) (total time: 10589ms):
	Trace[1217777566]: ---"Objects listed" error:Unauthorized 10588ms (00:30:02.858)
	Trace[1217777566]: [10.589536721s] [10.589536721s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1846856979]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 00:29:52.635) (total time: 10227ms):
	Trace[1846856979]: ---"Objects listed" error:Unauthorized 10226ms (00:30:02.861)
	Trace[1846856979]: [10.227274956s] [10.227274956s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +8.825427] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057593] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065677] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.195559] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.109938] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.261884] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.129275] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.597572] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.062309] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.075955] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.082514] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.034910] kauditd_printk_skb: 21 callbacks suppressed
	[Jul17 00:21] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.822749] kauditd_printk_skb: 24 callbacks suppressed
	[Jul17 00:28] kauditd_printk_skb: 1 callbacks suppressed
	[Jul17 00:31] systemd-fstab-generator[3692]: Ignoring "noauto" option for root device
	[  +0.216028] systemd-fstab-generator[3758]: Ignoring "noauto" option for root device
	[  +0.227364] systemd-fstab-generator[3827]: Ignoring "noauto" option for root device
	[  +0.155492] systemd-fstab-generator[3839]: Ignoring "noauto" option for root device
	[  +0.283345] systemd-fstab-generator[3868]: Ignoring "noauto" option for root device
	[ +10.230037] systemd-fstab-generator[3997]: Ignoring "noauto" option for root device
	[  +0.086916] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.012354] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.429241] kauditd_printk_skb: 73 callbacks suppressed
	[Jul17 00:32] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [ab8577693652ff4c67bbb6255ecc5adf055fe0eb1d901b61d91fcc46bffbab36] <==
	2024/07/17 00:30:04 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-17T00:30:04.424266Z","caller":"traceutil/trace.go:171","msg":"trace[2084447961] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; }","duration":"568.016613ms","start":"2024-07-17T00:30:03.856238Z","end":"2024-07-17T00:30:04.424254Z","steps":["trace[2084447961] 'agreement among raft nodes before linearized reading'  (duration: 553.842388ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:30:04.429786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:30:03.856232Z","time spent":"573.541514ms","remote":"127.0.0.1:35722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:500 "}
	2024/07/17 00:30:04 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T00:30:04.576153Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":10056697113903918594,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-17T00:30:04.687881Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.238:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:30:04.687938Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.238:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T00:30:04.688033Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fff3906243738b90","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-17T00:30:04.68823Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.688475Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.688593Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.688783Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.68891Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.688995Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.689036Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"dcaa4dc618676428"}
	{"level":"info","ts":"2024-07-17T00:30:04.689044Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689055Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.68908Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689157Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fff3906243738b90","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689409Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689445Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.689456Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e9e80507bffdb4d1"}
	{"level":"info","ts":"2024-07-17T00:30:04.692456Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-07-17T00:30:04.692685Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-07-17T00:30:04.692785Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-565881","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.238:2380"],"advertise-client-urls":["https://192.168.39.238:2379"]}
	
	
	==> etcd [fb316c8a568ce246077dcc06686fefd8b528f115d70d6e9a361ec15190a35bf5] <==
	{"level":"info","ts":"2024-07-17T00:44:21.742009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgPreVoteResp from fff3906243738b90 at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:21.742048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 [logterm: 3, index: 4182] sent MsgPreVote request to dcaa4dc618676428 at term 3"}
	{"level":"warn","ts":"2024-07-17T00:44:21.777541Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":10056697114077802288,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-17T00:44:22.278518Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":10056697114077802288,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-07-17T00:44:22.740802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:22.740914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:22.740948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgPreVoteResp from fff3906243738b90 at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:22.74098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 [logterm: 3, index: 4182] sent MsgPreVote request to dcaa4dc618676428 at term 3"}
	{"level":"warn","ts":"2024-07-17T00:44:22.779166Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":10056697114077802288,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-17T00:44:23.280932Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":10056697114077802288,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-17T00:44:23.369466Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dcaa4dc618676428","rtt":"900.231µs","error":"dial tcp 192.168.39.14:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-17T00:44:23.369548Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dcaa4dc618676428","rtt":"10.204255ms","error":"dial tcp 192.168.39.14:2380: i/o timeout"}
	{"level":"info","ts":"2024-07-17T00:44:23.741791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:23.741848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:23.741862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgPreVoteResp from fff3906243738b90 at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:23.741876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 [logterm: 3, index: 4182] sent MsgPreVote request to dcaa4dc618676428 at term 3"}
	{"level":"warn","ts":"2024-07-17T00:44:23.770533Z","caller":"etcdserver/v3_server.go:909","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"info","ts":"2024-07-17T00:44:24.741162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:24.741286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:24.741323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgPreVoteResp from fff3906243738b90 at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:24.74138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 [logterm: 3, index: 4182] sent MsgPreVote request to dcaa4dc618676428 at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:25.74092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:25.740974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:25.740988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgPreVoteResp from fff3906243738b90 at term 3"}
	{"level":"info","ts":"2024-07-17T00:44:25.741003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 [logterm: 3, index: 4182] sent MsgPreVote request to dcaa4dc618676428 at term 3"}
	
	
	==> kernel <==
	 00:44:25 up 24 min,  0 users,  load average: 0.34, 0.59, 0.45
	Linux ha-565881 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [410067f28bfdb57b0ad95587650a9d04d8a65ac68ee45d2fb125aad94de7c95e] <==
	I0717 00:43:53.895099       1 main.go:303] handling current node
	I0717 00:43:53.895109       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:43:53.895113       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:44:03.897863       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:44:03.898001       1 main.go:303] handling current node
	I0717 00:44:03.898030       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:44:03.898048       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:44:03.898196       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:44:03.898275       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	W0717 00:44:05.819264       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3523": dial tcp 10.96.0.1:443: connect: no route to host
	E0717 00:44:05.819361       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3523": dial tcp 10.96.0.1:443: connect: no route to host
	I0717 00:44:13.895069       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:44:13.895120       1 main.go:303] handling current node
	I0717 00:44:13.895138       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:44:13.895143       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:44:13.895275       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:44:13.895281       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:44:23.894206       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:44:23.894286       1 main.go:303] handling current node
	I0717 00:44:23.894308       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:44:23.894314       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:44:23.894542       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:44:23.894584       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	W0717 00:44:24.251226       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3523": dial tcp 10.96.0.1:443: connect: no route to host
	E0717 00:44:24.251373       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3523": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kindnet [52b45808cde82717d37f9fa2ae8082ad5cf6a166852dbc7568bda29eb1ccf146] <==
	I0717 00:29:36.730573       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:29:36.730625       1 main.go:303] handling current node
	I0717 00:29:36.730649       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:29:36.730656       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:29:36.730916       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:29:36.730951       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:29:36.731053       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:29:36.731081       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:29:46.727855       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:29:46.727918       1 main.go:303] handling current node
	I0717 00:29:46.727931       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:29:46.727937       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	I0717 00:29:46.728154       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:29:46.728180       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:29:46.728251       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:29:46.728270       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:29:56.722847       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 00:29:56.722880       1 main.go:326] Node ha-565881-m03 has CIDR [10.244.2.0/24] 
	I0717 00:29:56.723110       1 main.go:299] Handling node with IPs: map[192.168.39.79:{}]
	I0717 00:29:56.723136       1 main.go:326] Node ha-565881-m04 has CIDR [10.244.3.0/24] 
	I0717 00:29:56.723203       1 main.go:299] Handling node with IPs: map[192.168.39.238:{}]
	I0717 00:29:56.723223       1 main.go:303] handling current node
	I0717 00:29:56.723239       1 main.go:299] Handling node with IPs: map[192.168.39.14:{}]
	I0717 00:29:56.723243       1 main.go:326] Node ha-565881-m02 has CIDR [10.244.1.0/24] 
	E0717 00:30:02.872654       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> kube-apiserver [3e103c583281da20d2712a934b0cdf7016a38e002a4aad8e5b2f1fe11db5529e] <==
	Trace[785715572]: ---"Objects listed" error:etcdserver: request timed out 10276ms (00:44:09.798)
	Trace[785715572]: [10.276652742s] [10.276652742s] END
	E0717 00:44:09.798973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: etcdserver: request timed out
	I0717 00:44:09.799077       1 trace.go:236] Trace[2001189931]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:4bcf9cdc-8300-4d6b-ba9c-68dabd281ae3,client:127.0.0.1,api-group:apiregistration.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:apiservices,scope:cluster,url:/apis/apiregistration.k8s.io/v1/apiservices,user-agent:kube-apiserver/v1.30.2 (linux/amd64) kubernetes/3968350,verb:LIST (17-Jul-2024 00:44:00.989) (total time: 8809ms):
	Trace[2001189931]: ["List(recursive=true) etcd3" audit-id:4bcf9cdc-8300-4d6b-ba9c-68dabd281ae3,key:/apiregistration.k8s.io/apiservices,resourceVersion:0,resourceVersionMatch:,limit:500,continue: 8809ms (00:44:00.990)]
	Trace[2001189931]: [8.809074464s] [8.809074464s] END
	I0717 00:44:09.799109       1 trace.go:236] Trace[235122803]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:1d750d02-1773-455c-929e-126f275ef3e7,client:127.0.0.1,api-group:admissionregistration.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:validatingwebhookconfigurations,scope:cluster,url:/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations,user-agent:kube-apiserver/v1.30.2 (linux/amd64) kubernetes/3968350,verb:LIST (17-Jul-2024 00:44:02.077) (total time: 7721ms):
	Trace[235122803]: ["List(recursive=true) etcd3" audit-id:1d750d02-1773-455c-929e-126f275ef3e7,key:/validatingwebhookconfigurations,resourceVersion:0,resourceVersionMatch:,limit:500,continue: 7721ms (00:44:02.077)]
	Trace[235122803]: [7.721280626s] [7.721280626s] END
	E0717 00:44:09.798884       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	E0717 00:44:09.798896       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	E0717 00:44:09.798871       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	W0717 00:44:09.799774       1 reflector.go:547] pkg/client/informers/externalversions/factory.go:141: failed to list *v1.APIService: etcdserver: request timed out
	E0717 00:44:09.799806       1 reflector.go:150] pkg/client/informers/externalversions/factory.go:141: Failed to watch *v1.APIService: failed to list *v1.APIService: etcdserver: request timed out
	W0717 00:44:09.799856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingWebhookConfiguration: etcdserver: request timed out
	E0717 00:44:09.799882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingWebhookConfiguration: failed to list *v1.ValidatingWebhookConfiguration: etcdserver: request timed out
	E0717 00:44:16.770548       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	I0717 00:44:16.770777       1 trace.go:236] Trace[1589644859]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:7253fdeb-6044-445f-a4cb-116e258c5eeb,client:127.0.0.1,api-group:rbac.authorization.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:clusterroles,scope:cluster,url:/apis/rbac.authorization.k8s.io/v1/clusterroles,user-agent:kube-apiserver/v1.30.2 (linux/amd64) kubernetes/3968350,verb:LIST (17-Jul-2024 00:44:02.771) (total time: 13999ms):
	Trace[1589644859]: ["List(recursive=true) etcd3" audit-id:7253fdeb-6044-445f-a4cb-116e258c5eeb,key:/clusterroles,resourceVersion:,resourceVersionMatch:,limit:0,continue: 13999ms (00:44:02.771)]
	Trace[1589644859]: [13.999073585s] [13.999073585s] END
	E0717 00:44:16.770789       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	E0717 00:44:16.771522       1 storage_rbac.go:187] unable to initialize clusterroles: etcdserver: request timed out
	I0717 00:44:16.771799       1 trace.go:236] Trace[1702487092]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:17857b5d-ab19-4489-9b40-1b3faf0fc8ec,client:127.0.0.1,api-group:scheduling.k8s.io,api-version:v1,name:system-node-critical,subresource:,namespace:,protocol:HTTP/2.0,resource:priorityclasses,scope:resource,url:/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical,user-agent:kube-apiserver/v1.30.2 (linux/amd64) kubernetes/3968350,verb:GET (17-Jul-2024 00:44:02.771) (total time: 13999ms):
	Trace[1702487092]: [13.999846847s] [13.999846847s] END
	F0717 00:44:16.772055       1 hooks.go:203] PostStartHook "rbac/bootstrap-roles" failed: unable to initialize roles: timed out waiting for the condition
	
	
	==> kube-controller-manager [1293602792aa7c1e3608b5a2b29baded83927982c3f7c1b2bd54bb8c80a59b5c] <==
	E0717 00:44:16.347275       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-565881"
	E0717 00:44:16.347367       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.238:8443/api/v1/nodes/ha-565881\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node=""
	W0717 00:44:16.348312       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0717 00:44:16.518043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0717 00:44:16.518092       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	W0717 00:44:17.852946       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.238:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.238:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.238:36100->192.168.39.238:8443: read: connection reset by peer
	W0717 00:44:18.854333       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.238:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:44:19.734948       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RoleBinding: Get "https://192.168.39.238:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?resourceVersion=3545": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:44:19.735038       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RoleBinding: failed to list *v1.RoleBinding: Get "https://192.168.39.238:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?resourceVersion=3545": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:44:20.855977       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.238:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:44:20.856055       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-565881-m02"
	E0717 00:44:20.856076       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.238:8443/api/v1/nodes/ha-565881-m02\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node=""
	E0717 00:44:23.723105       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565881-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565881-m03"
	E0717 00:44:23.723202       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565881-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565881-m03"
	E0717 00:44:23.723235       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565881-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565881-m03"
	E0717 00:44:23.723338       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565881-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565881-m03"
	E0717 00:44:23.723365       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565881-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565881-m03"
	W0717 00:44:23.723985       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.238:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:44:23.851942       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=3545": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:44:23.852024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=3545": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:44:24.224867       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.238:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:44:25.226259       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.238:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:44:25.857647       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.238:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.238:8443: connect: connection refused
	W0717 00:44:25.993164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingWebhookConfiguration: Get "https://192.168.39.238:8443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?resourceVersion=3545": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:44:25.993233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingWebhookConfiguration: failed to list *v1.ValidatingWebhookConfiguration: Get "https://192.168.39.238:8443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?resourceVersion=3545": dial tcp 192.168.39.238:8443: connect: connection refused
	
	
	==> kube-controller-manager [583c9df0a3d19cfa48d7b3cf52b8574d3202801a7d93d34a7793b63af4ea537b] <==
	I0717 00:31:53.451857       1 serving.go:380] Generated self-signed cert in-memory
	I0717 00:31:54.330220       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 00:31:54.330410       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:31:54.332377       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 00:31:54.332563       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 00:31:54.333171       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 00:31:54.333111       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0717 00:32:15.048922       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.238:8443/healthz\": dial tcp 192.168.39.238:8443: connect: connection refused"
	
	
	==> kube-proxy [02a737bcd9b5f02d2514aabaf98997edc64381f00c3d18b2e2a13e876a00dd96] <==
	E0717 00:42:26.171317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=3434": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:42:26.171193       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3514": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:42:26.171427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3514": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:42:26.171279       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3431": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:42:26.171502       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3431": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:42:36.477320       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=3434": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:42:36.477461       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=3434": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:42:39.548411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3514": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:42:39.548475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3514": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:42:39.548613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3431": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:42:39.548668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3431": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:42:57.980943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=3434": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:42:57.981063       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=3434": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:43:04.124173       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3431": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:43:04.124299       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3431": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:43:07.196311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3514": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:43:07.196417       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3514": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:43:28.700909       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=3434": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:43:28.701251       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=3434": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:43:40.988882       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3514": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:43:40.989034       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3514": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:43:53.275780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3431": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:43:53.275983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=3431": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:44:05.565065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=3434": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:44:05.565174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=3434": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [e572bb9aec2e8c1a21ff3db12be1517047eb579038f7d801653565d48c4e5c8f] <==
	E0717 00:28:59.070494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:02.139362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:02.139481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:02.139619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:02.139672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:02.139814       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:02.139854       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:08.284785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:08.285266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:08.285574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:08.285674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:08.285577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:08.285769       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:17.500214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:17.500543       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:20.571289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:20.571426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:20.571642       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:20.572230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:39.005784       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:39.006082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:42.075640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:42.075804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565881&resourceVersion=1910": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:29:42.075993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:29:42.076034       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [1ec015ce8f841a8f95508beb98f8993a0d78a40173076a7c7c80ec3fa67d02a6] <==
	W0717 00:29:59.991637       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:29:59.991777       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:30:00.157330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:30:00.157432       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:30:00.277280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:30:00.277385       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:30:00.411889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:30:00.411986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:30:00.443474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:30:00.443527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:30:00.612176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 00:30:00.612230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:30:00.899110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:30:00.899224       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:30:01.520074       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:30:01.520137       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:30:01.959591       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:30:01.959649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:30:02.057654       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:30:02.057750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:30:02.126931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:30:02.126999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:30:02.459997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:30:02.460096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:30:04.395180       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [85245e283143e0cd7a410d9d30cdb544dd147a005e43aae60f4823311b9bb832] <==
	E0717 00:43:54.902454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:43:56.170604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:43:56.170773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:43:56.891991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:43:56.892053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:43:58.036890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:43:58.036942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:43:58.880550       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:43:58.880600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:43:58.990804       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:43:58.990901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:43:59.275748       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:43:59.275926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:44:01.046765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:44:01.046818       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:44:02.663326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:44:02.663375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:44:02.767048       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:44:02.767161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:44:03.420023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 00:44:03.420191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:44:03.702333       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:44:03.702392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 00:44:26.012227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csistoragecapacities?resourceVersion=3545": dial tcp 192.168.39.238:8443: connect: connection refused
	E0717 00:44:26.012327       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csistoragecapacities?resourceVersion=3545": dial tcp 192.168.39.238:8443: connect: connection refused
	
	
	==> kubelet <==
	Jul 17 00:44:05 ha-565881 kubelet[1370]: E0717 00:44:05.564293    1370 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-565881.17e2d8885e332436\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-565881.17e2d8885e332436  kube-system   2104 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-565881,UID:137a148a990fa52e8281e355098ea021,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-565881,},FirstTimestamp:2024-07-17 00:28:07 +0000 UTC,LastTimestamp:2024-07-17 00:41:39.817318489 +0000 UTC m=+1260.426991097,Count:26,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-565881,}"
	Jul 17 00:44:05 ha-565881 kubelet[1370]: W0717 00:44:05.564166    1370 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=3466": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 17 00:44:05 ha-565881 kubelet[1370]: E0717 00:44:05.564483    1370 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=3466": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 17 00:44:08 ha-565881 kubelet[1370]: I0717 00:44:08.635130    1370 status_manager.go:853] "Failed to get status for pod" podUID="137a148a990fa52e8281e355098ea021" pod="kube-system/kube-apiserver-ha-565881" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 17 00:44:08 ha-565881 kubelet[1370]: E0717 00:44:08.635156    1370 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-565881\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-565881?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 17 00:44:11 ha-565881 kubelet[1370]: E0717 00:44:11.707395    1370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-565881?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 17 00:44:11 ha-565881 kubelet[1370]: E0717 00:44:11.707977    1370 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-565881\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-565881?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 17 00:44:11 ha-565881 kubelet[1370]: E0717 00:44:11.708210    1370 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 17 00:44:11 ha-565881 kubelet[1370]: I0717 00:44:11.708080    1370 status_manager.go:853] "Failed to get status for pod" podUID="0aa1050a-43e1-4f7a-a2df-80cafb48e673" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 17 00:44:14 ha-565881 kubelet[1370]: I0717 00:44:14.779263    1370 status_manager.go:853] "Failed to get status for pod" podUID="0aa1050a-43e1-4f7a-a2df-80cafb48e673" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 17 00:44:17 ha-565881 kubelet[1370]: I0717 00:44:17.808317    1370 scope.go:117] "RemoveContainer" containerID="afd50ddb3c371671dcdf90746290d6cda31d25cb7e2bf4da6cadf9cd80a3ed53"
	Jul 17 00:44:17 ha-565881 kubelet[1370]: I0717 00:44:17.809874    1370 scope.go:117] "RemoveContainer" containerID="3e103c583281da20d2712a934b0cdf7016a38e002a4aad8e5b2f1fe11db5529e"
	Jul 17 00:44:17 ha-565881 kubelet[1370]: E0717 00:44:17.810556    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-565881_kube-system(137a148a990fa52e8281e355098ea021)\"" pod="kube-system/kube-apiserver-ha-565881" podUID="137a148a990fa52e8281e355098ea021"
	Jul 17 00:44:17 ha-565881 kubelet[1370]: E0717 00:44:17.851011    1370 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-565881.17e2d8885e332436\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-565881.17e2d8885e332436  kube-system   2104 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-565881,UID:137a148a990fa52e8281e355098ea021,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-565881,},FirstTimestamp:2024-07-17 00:28:07 +0000 UTC,LastTimestamp:2024-07-17 00:41:39.817318489 +0000 UTC m=+1260.426991097,Count:26,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-565881,}"
	Jul 17 00:44:17 ha-565881 kubelet[1370]: I0717 00:44:17.851266    1370 status_manager.go:853] "Failed to get status for pod" podUID="a56a7652e75cdb2280ae1925adea5b0d" pod="kube-system/kube-vip-ha-565881" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-565881\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 17 00:44:18 ha-565881 kubelet[1370]: I0717 00:44:18.824216    1370 scope.go:117] "RemoveContainer" containerID="3e103c583281da20d2712a934b0cdf7016a38e002a4aad8e5b2f1fe11db5529e"
	Jul 17 00:44:18 ha-565881 kubelet[1370]: E0717 00:44:18.825231    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-565881_kube-system(137a148a990fa52e8281e355098ea021)\"" pod="kube-system/kube-apiserver-ha-565881" podUID="137a148a990fa52e8281e355098ea021"
	Jul 17 00:44:20 ha-565881 kubelet[1370]: E0717 00:44:20.923211    1370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-565881?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 17 00:44:20 ha-565881 kubelet[1370]: I0717 00:44:20.923435    1370 status_manager.go:853] "Failed to get status for pod" podUID="137a148a990fa52e8281e355098ea021" pod="kube-system/kube-apiserver-ha-565881" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565881\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 17 00:44:20 ha-565881 kubelet[1370]: W0717 00:44:20.923254    1370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=3545": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 17 00:44:20 ha-565881 kubelet[1370]: E0717 00:44:20.924254    1370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=3545": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 17 00:44:21 ha-565881 kubelet[1370]: I0717 00:44:21.278679    1370 scope.go:117] "RemoveContainer" containerID="3e103c583281da20d2712a934b0cdf7016a38e002a4aad8e5b2f1fe11db5529e"
	Jul 17 00:44:21 ha-565881 kubelet[1370]: E0717 00:44:21.279327    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-565881_kube-system(137a148a990fa52e8281e355098ea021)\"" pod="kube-system/kube-apiserver-ha-565881" podUID="137a148a990fa52e8281e355098ea021"
	Jul 17 00:44:23 ha-565881 kubelet[1370]: E0717 00:44:23.995108    1370 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-565881\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-565881?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 17 00:44:23 ha-565881 kubelet[1370]: I0717 00:44:23.995100    1370 status_manager.go:853] "Failed to get status for pod" podUID="a56a7652e75cdb2280ae1925adea5b0d" pod="kube-system/kube-vip-ha-565881" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-565881\": dial tcp 192.168.39.254:8443: connect: no route to host"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 00:44:25.041781   40968 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19265-12897/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565881 -n ha-565881
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565881 -n ha-565881: exit status 2 (225.251482ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-565881" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (173.14s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (322.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-905682
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-905682
E0717 00:57:21.784680   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:59:18.739216   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-905682: exit status 82 (2m1.856162941s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-905682-m03"  ...
	* Stopping node "multinode-905682-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-905682" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-905682 --wait=true -v=8 --alsologtostderr
E0717 01:02:12.451222   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-905682 --wait=true -v=8 --alsologtostderr: (3m18.181496479s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-905682
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-905682 -n multinode-905682
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-905682 logs -n 25: (1.467107787s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-905682 cp multinode-905682-m02:/home/docker/cp-test.txt                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2525639886/001/cp-test_multinode-905682-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-905682 cp multinode-905682-m02:/home/docker/cp-test.txt                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682:/home/docker/cp-test_multinode-905682-m02_multinode-905682.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n multinode-905682 sudo cat                                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | /home/docker/cp-test_multinode-905682-m02_multinode-905682.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-905682 cp multinode-905682-m02:/home/docker/cp-test.txt                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03:/home/docker/cp-test_multinode-905682-m02_multinode-905682-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n multinode-905682-m03 sudo cat                                   | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | /home/docker/cp-test_multinode-905682-m02_multinode-905682-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-905682 cp testdata/cp-test.txt                                                | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-905682 cp multinode-905682-m03:/home/docker/cp-test.txt                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2525639886/001/cp-test_multinode-905682-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-905682 cp multinode-905682-m03:/home/docker/cp-test.txt                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682:/home/docker/cp-test_multinode-905682-m03_multinode-905682.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n multinode-905682 sudo cat                                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | /home/docker/cp-test_multinode-905682-m03_multinode-905682.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-905682 cp multinode-905682-m03:/home/docker/cp-test.txt                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m02:/home/docker/cp-test_multinode-905682-m03_multinode-905682-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n multinode-905682-m02 sudo cat                                   | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | /home/docker/cp-test_multinode-905682-m03_multinode-905682-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-905682 node stop m03                                                          | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	| node    | multinode-905682 node start                                                             | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-905682                                                                | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:57 UTC |                     |
	| stop    | -p multinode-905682                                                                     | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:57 UTC |                     |
	| start   | -p multinode-905682                                                                     | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:59 UTC | 17 Jul 24 01:02 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-905682                                                                | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 01:02 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:59:21
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:59:21.416754   49910 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:59:21.417016   49910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:59:21.417025   49910 out.go:304] Setting ErrFile to fd 2...
	I0717 00:59:21.417029   49910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:59:21.417202   49910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:59:21.417762   49910 out.go:298] Setting JSON to false
	I0717 00:59:21.418682   49910 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6110,"bootTime":1721171851,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:59:21.418745   49910 start.go:139] virtualization: kvm guest
	I0717 00:59:21.421205   49910 out.go:177] * [multinode-905682] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:59:21.422714   49910 notify.go:220] Checking for updates...
	I0717 00:59:21.422741   49910 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:59:21.424073   49910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:59:21.425393   49910 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:59:21.427035   49910 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:59:21.428494   49910 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:59:21.429808   49910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:59:21.431727   49910 config.go:182] Loaded profile config "multinode-905682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:59:21.431809   49910 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:59:21.432202   49910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:59:21.432271   49910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:59:21.447285   49910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46689
	I0717 00:59:21.447774   49910 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:59:21.448421   49910 main.go:141] libmachine: Using API Version  1
	I0717 00:59:21.448446   49910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:59:21.448855   49910 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:59:21.449107   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 00:59:21.483446   49910 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 00:59:21.484900   49910 start.go:297] selected driver: kvm2
	I0717 00:59:21.484917   49910 start.go:901] validating driver "kvm2" against &{Name:multinode-905682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-905682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.142 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:59:21.485052   49910 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:59:21.485361   49910 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:59:21.485457   49910 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:59:21.499701   49910 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:59:21.500381   49910 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:59:21.500411   49910 cni.go:84] Creating CNI manager for ""
	I0717 00:59:21.500422   49910 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 00:59:21.500509   49910 start.go:340] cluster config:
	{Name:multinode-905682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-905682 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.142 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:59:21.500678   49910 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:59:21.502619   49910 out.go:177] * Starting "multinode-905682" primary control-plane node in "multinode-905682" cluster
	I0717 00:59:21.503881   49910 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:59:21.503918   49910 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:59:21.503928   49910 cache.go:56] Caching tarball of preloaded images
	I0717 00:59:21.504000   49910 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:59:21.504011   49910 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:59:21.504136   49910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/config.json ...
	I0717 00:59:21.504312   49910 start.go:360] acquireMachinesLock for multinode-905682: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:59:21.504349   49910 start.go:364] duration metric: took 22.013µs to acquireMachinesLock for "multinode-905682"
	I0717 00:59:21.504362   49910 start.go:96] Skipping create...Using existing machine configuration
	I0717 00:59:21.504371   49910 fix.go:54] fixHost starting: 
	I0717 00:59:21.504654   49910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:59:21.504685   49910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:59:21.518170   49910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0717 00:59:21.518583   49910 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:59:21.519087   49910 main.go:141] libmachine: Using API Version  1
	I0717 00:59:21.519114   49910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:59:21.519398   49910 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:59:21.519554   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 00:59:21.519685   49910 main.go:141] libmachine: (multinode-905682) Calling .GetState
	I0717 00:59:21.521257   49910 fix.go:112] recreateIfNeeded on multinode-905682: state=Running err=<nil>
	W0717 00:59:21.521280   49910 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 00:59:21.523262   49910 out.go:177] * Updating the running kvm2 "multinode-905682" VM ...
	I0717 00:59:21.524442   49910 machine.go:94] provisionDockerMachine start ...
	I0717 00:59:21.524455   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 00:59:21.524646   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 00:59:21.526885   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.527287   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:21.527317   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.527472   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 00:59:21.527610   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.527767   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.527962   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 00:59:21.528216   49910 main.go:141] libmachine: Using SSH client type: native
	I0717 00:59:21.528434   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0717 00:59:21.528448   49910 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:59:21.641899   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-905682
	
	I0717 00:59:21.641925   49910 main.go:141] libmachine: (multinode-905682) Calling .GetMachineName
	I0717 00:59:21.642151   49910 buildroot.go:166] provisioning hostname "multinode-905682"
	I0717 00:59:21.642176   49910 main.go:141] libmachine: (multinode-905682) Calling .GetMachineName
	I0717 00:59:21.642345   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 00:59:21.645036   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.645409   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:21.645434   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.645570   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 00:59:21.645753   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.645922   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.646089   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 00:59:21.646275   49910 main.go:141] libmachine: Using SSH client type: native
	I0717 00:59:21.646434   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0717 00:59:21.646445   49910 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-905682 && echo "multinode-905682" | sudo tee /etc/hostname
	I0717 00:59:21.772936   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-905682
	
	I0717 00:59:21.772969   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 00:59:21.775470   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.775776   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:21.775825   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.775915   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 00:59:21.776170   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.776351   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.776529   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 00:59:21.776725   49910 main.go:141] libmachine: Using SSH client type: native
	I0717 00:59:21.776902   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0717 00:59:21.776926   49910 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-905682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-905682/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-905682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:59:21.881540   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:59:21.881568   49910 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:59:21.881590   49910 buildroot.go:174] setting up certificates
	I0717 00:59:21.881600   49910 provision.go:84] configureAuth start
	I0717 00:59:21.881612   49910 main.go:141] libmachine: (multinode-905682) Calling .GetMachineName
	I0717 00:59:21.881871   49910 main.go:141] libmachine: (multinode-905682) Calling .GetIP
	I0717 00:59:21.884541   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.884923   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:21.884964   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.885123   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 00:59:21.887170   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.887489   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:21.887511   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.887613   49910 provision.go:143] copyHostCerts
	I0717 00:59:21.887637   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:59:21.887679   49910 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 00:59:21.887693   49910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:59:21.887755   49910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:59:21.887857   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:59:21.887877   49910 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 00:59:21.887883   49910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:59:21.887911   49910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:59:21.887972   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:59:21.887988   49910 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 00:59:21.887993   49910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:59:21.888013   49910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:59:21.888073   49910 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.multinode-905682 san=[127.0.0.1 192.168.39.36 localhost minikube multinode-905682]
	I0717 00:59:21.993347   49910 provision.go:177] copyRemoteCerts
	I0717 00:59:21.993413   49910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:59:21.993438   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 00:59:21.996035   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.996350   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:21.996388   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.996591   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 00:59:21.996745   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.996864   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 00:59:21.997035   49910 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/multinode-905682/id_rsa Username:docker}
	I0717 00:59:22.080949   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:59:22.081019   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:59:22.106693   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:59:22.106754   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 00:59:22.131114   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:59:22.131211   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 00:59:22.160581   49910 provision.go:87] duration metric: took 278.96683ms to configureAuth
	I0717 00:59:22.160610   49910 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:59:22.160817   49910 config.go:182] Loaded profile config "multinode-905682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:59:22.160888   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 00:59:22.163989   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:22.164378   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:22.164405   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:22.164599   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 00:59:22.164762   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:22.164923   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:22.165098   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 00:59:22.165262   49910 main.go:141] libmachine: Using SSH client type: native
	I0717 00:59:22.165424   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0717 00:59:22.165437   49910 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:00:53.059796   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:00:53.059824   49910 machine.go:97] duration metric: took 1m31.535371677s to provisionDockerMachine
	I0717 01:00:53.059836   49910 start.go:293] postStartSetup for "multinode-905682" (driver="kvm2")
	I0717 01:00:53.059849   49910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:00:53.059881   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 01:00:53.060223   49910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:00:53.060243   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 01:00:53.063159   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.063487   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 01:00:53.063513   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.063608   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 01:00:53.063787   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 01:00:53.063948   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 01:00:53.064086   49910 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/multinode-905682/id_rsa Username:docker}
	I0717 01:00:53.149985   49910 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:00:53.154685   49910 command_runner.go:130] > NAME=Buildroot
	I0717 01:00:53.154710   49910 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 01:00:53.154716   49910 command_runner.go:130] > ID=buildroot
	I0717 01:00:53.154722   49910 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 01:00:53.154729   49910 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 01:00:53.154814   49910 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:00:53.154835   49910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:00:53.154899   49910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:00:53.154965   49910 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:00:53.154974   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /etc/ssl/certs/200682.pem
	I0717 01:00:53.155078   49910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:00:53.165157   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:00:53.190755   49910 start.go:296] duration metric: took 130.903803ms for postStartSetup
	I0717 01:00:53.190817   49910 fix.go:56] duration metric: took 1m31.686446496s for fixHost
	I0717 01:00:53.190845   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 01:00:53.193379   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.193756   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 01:00:53.193792   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.193997   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 01:00:53.194209   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 01:00:53.194372   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 01:00:53.194568   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 01:00:53.194736   49910 main.go:141] libmachine: Using SSH client type: native
	I0717 01:00:53.194906   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0717 01:00:53.194916   49910 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:00:53.297811   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721178053.263704071
	
	I0717 01:00:53.297841   49910 fix.go:216] guest clock: 1721178053.263704071
	I0717 01:00:53.297852   49910 fix.go:229] Guest: 2024-07-17 01:00:53.263704071 +0000 UTC Remote: 2024-07-17 01:00:53.190823267 +0000 UTC m=+91.807199193 (delta=72.880804ms)
	I0717 01:00:53.297881   49910 fix.go:200] guest clock delta is within tolerance: 72.880804ms
	I0717 01:00:53.297891   49910 start.go:83] releasing machines lock for "multinode-905682", held for 1m31.793530968s
	I0717 01:00:53.297923   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 01:00:53.298229   49910 main.go:141] libmachine: (multinode-905682) Calling .GetIP
	I0717 01:00:53.300550   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.300952   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 01:00:53.300982   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.301106   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 01:00:53.301678   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 01:00:53.301835   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 01:00:53.301910   49910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:00:53.301960   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 01:00:53.302062   49910 ssh_runner.go:195] Run: cat /version.json
	I0717 01:00:53.302081   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 01:00:53.304379   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.304707   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 01:00:53.304735   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.304759   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.304900   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 01:00:53.305068   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 01:00:53.305202   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 01:00:53.305224   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 01:00:53.305225   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.305387   49910 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/multinode-905682/id_rsa Username:docker}
	I0717 01:00:53.305448   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 01:00:53.305596   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 01:00:53.305741   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 01:00:53.305896   49910 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/multinode-905682/id_rsa Username:docker}
	I0717 01:00:53.381477   49910 command_runner.go:130] > {"iso_version": "v1.33.1-1721037971-19249", "kicbase_version": "v0.0.44-1720578864-19219", "minikube_version": "v1.33.1", "commit": "82f9201b4da402696a199908092788c5f6c09714"}
	I0717 01:00:53.381635   49910 ssh_runner.go:195] Run: systemctl --version
	I0717 01:00:53.405269   49910 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 01:00:53.405336   49910 command_runner.go:130] > systemd 252 (252)
	I0717 01:00:53.405366   49910 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0717 01:00:53.405434   49910 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:00:53.567925   49910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 01:00:53.575299   49910 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 01:00:53.575607   49910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:00:53.575674   49910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:00:53.584942   49910 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 01:00:53.584964   49910 start.go:495] detecting cgroup driver to use...
	I0717 01:00:53.585041   49910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:00:53.602283   49910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:00:53.616665   49910 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:00:53.616729   49910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:00:53.629988   49910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:00:53.643128   49910 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:00:53.789248   49910 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:00:53.927259   49910 docker.go:233] disabling docker service ...
	I0717 01:00:53.927340   49910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:00:53.944388   49910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:00:53.958802   49910 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:00:54.100243   49910 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:00:54.248952   49910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:00:54.263522   49910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:00:54.281243   49910 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 01:00:54.281628   49910 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:00:54.281682   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.291991   49910 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:00:54.292054   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.302324   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.312346   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.322384   49910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:00:54.332779   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.343034   49910 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.353704   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.370183   49910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:00:54.398161   49910 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 01:00:54.398262   49910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:00:54.407992   49910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:00:54.548336   49910 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:00:56.382898   49910 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.834525357s)
	I0717 01:00:56.382934   49910 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:00:56.382988   49910 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:00:56.387951   49910 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 01:00:56.387975   49910 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 01:00:56.387991   49910 command_runner.go:130] > Device: 0,22	Inode: 1321        Links: 1
	I0717 01:00:56.388001   49910 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 01:00:56.388009   49910 command_runner.go:130] > Access: 2024-07-17 01:00:56.301662268 +0000
	I0717 01:00:56.388019   49910 command_runner.go:130] > Modify: 2024-07-17 01:00:56.235659640 +0000
	I0717 01:00:56.388029   49910 command_runner.go:130] > Change: 2024-07-17 01:00:56.235659640 +0000
	I0717 01:00:56.388038   49910 command_runner.go:130] >  Birth: -
	I0717 01:00:56.388062   49910 start.go:563] Will wait 60s for crictl version
	I0717 01:00:56.388105   49910 ssh_runner.go:195] Run: which crictl
	I0717 01:00:56.392003   49910 command_runner.go:130] > /usr/bin/crictl
	I0717 01:00:56.392075   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:00:56.431538   49910 command_runner.go:130] > Version:  0.1.0
	I0717 01:00:56.431566   49910 command_runner.go:130] > RuntimeName:  cri-o
	I0717 01:00:56.431574   49910 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0717 01:00:56.431582   49910 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 01:00:56.431616   49910 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:00:56.431685   49910 ssh_runner.go:195] Run: crio --version
	I0717 01:00:56.457481   49910 command_runner.go:130] > crio version 1.29.1
	I0717 01:00:56.457500   49910 command_runner.go:130] > Version:        1.29.1
	I0717 01:00:56.457506   49910 command_runner.go:130] > GitCommit:      unknown
	I0717 01:00:56.457510   49910 command_runner.go:130] > GitCommitDate:  unknown
	I0717 01:00:56.457513   49910 command_runner.go:130] > GitTreeState:   clean
	I0717 01:00:56.457520   49910 command_runner.go:130] > BuildDate:      2024-07-15T15:38:42Z
	I0717 01:00:56.457527   49910 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 01:00:56.457533   49910 command_runner.go:130] > Compiler:       gc
	I0717 01:00:56.457540   49910 command_runner.go:130] > Platform:       linux/amd64
	I0717 01:00:56.457547   49910 command_runner.go:130] > Linkmode:       dynamic
	I0717 01:00:56.457554   49910 command_runner.go:130] > BuildTags:      
	I0717 01:00:56.457563   49910 command_runner.go:130] >   containers_image_ostree_stub
	I0717 01:00:56.457567   49910 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 01:00:56.457573   49910 command_runner.go:130] >   btrfs_noversion
	I0717 01:00:56.457595   49910 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 01:00:56.457602   49910 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 01:00:56.457605   49910 command_runner.go:130] >   seccomp
	I0717 01:00:56.457612   49910 command_runner.go:130] > LDFlags:          unknown
	I0717 01:00:56.457618   49910 command_runner.go:130] > SeccompEnabled:   true
	I0717 01:00:56.457628   49910 command_runner.go:130] > AppArmorEnabled:  false
	I0717 01:00:56.458634   49910 ssh_runner.go:195] Run: crio --version
	I0717 01:00:56.486011   49910 command_runner.go:130] > crio version 1.29.1
	I0717 01:00:56.486032   49910 command_runner.go:130] > Version:        1.29.1
	I0717 01:00:56.486040   49910 command_runner.go:130] > GitCommit:      unknown
	I0717 01:00:56.486047   49910 command_runner.go:130] > GitCommitDate:  unknown
	I0717 01:00:56.486054   49910 command_runner.go:130] > GitTreeState:   clean
	I0717 01:00:56.486062   49910 command_runner.go:130] > BuildDate:      2024-07-15T15:38:42Z
	I0717 01:00:56.486066   49910 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 01:00:56.486070   49910 command_runner.go:130] > Compiler:       gc
	I0717 01:00:56.486086   49910 command_runner.go:130] > Platform:       linux/amd64
	I0717 01:00:56.486093   49910 command_runner.go:130] > Linkmode:       dynamic
	I0717 01:00:56.486098   49910 command_runner.go:130] > BuildTags:      
	I0717 01:00:56.486105   49910 command_runner.go:130] >   containers_image_ostree_stub
	I0717 01:00:56.486109   49910 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 01:00:56.486113   49910 command_runner.go:130] >   btrfs_noversion
	I0717 01:00:56.486123   49910 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 01:00:56.486129   49910 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 01:00:56.486133   49910 command_runner.go:130] >   seccomp
	I0717 01:00:56.486140   49910 command_runner.go:130] > LDFlags:          unknown
	I0717 01:00:56.486144   49910 command_runner.go:130] > SeccompEnabled:   true
	I0717 01:00:56.486150   49910 command_runner.go:130] > AppArmorEnabled:  false
	I0717 01:00:56.488148   49910 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:00:56.489426   49910 main.go:141] libmachine: (multinode-905682) Calling .GetIP
	I0717 01:00:56.492003   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:56.492340   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 01:00:56.492366   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:56.492580   49910 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:00:56.496859   49910 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0717 01:00:56.496951   49910 kubeadm.go:883] updating cluster {Name:multinode-905682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.2 ClusterName:multinode-905682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.142 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:00:56.497104   49910 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:00:56.497163   49910 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:00:56.553748   49910 command_runner.go:130] > {
	I0717 01:00:56.553772   49910 command_runner.go:130] >   "images": [
	I0717 01:00:56.553782   49910 command_runner.go:130] >     {
	I0717 01:00:56.553790   49910 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 01:00:56.553795   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.553801   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 01:00:56.553805   49910 command_runner.go:130] >       ],
	I0717 01:00:56.553809   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.553817   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 01:00:56.553826   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 01:00:56.553829   49910 command_runner.go:130] >       ],
	I0717 01:00:56.553834   49910 command_runner.go:130] >       "size": "65908273",
	I0717 01:00:56.553838   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.553842   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.553847   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.553851   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.553857   49910 command_runner.go:130] >     },
	I0717 01:00:56.553860   49910 command_runner.go:130] >     {
	I0717 01:00:56.553866   49910 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0717 01:00:56.553872   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.553877   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0717 01:00:56.553881   49910 command_runner.go:130] >       ],
	I0717 01:00:56.553884   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.553892   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0717 01:00:56.553905   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0717 01:00:56.553910   49910 command_runner.go:130] >       ],
	I0717 01:00:56.553914   49910 command_runner.go:130] >       "size": "87165492",
	I0717 01:00:56.553918   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.553928   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.553932   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.553937   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.553940   49910 command_runner.go:130] >     },
	I0717 01:00:56.553943   49910 command_runner.go:130] >     {
	I0717 01:00:56.553949   49910 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 01:00:56.553955   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.553960   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 01:00:56.553963   49910 command_runner.go:130] >       ],
	I0717 01:00:56.553970   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.553981   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 01:00:56.553990   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 01:00:56.553995   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554000   49910 command_runner.go:130] >       "size": "1363676",
	I0717 01:00:56.554006   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.554010   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554013   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554020   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554023   49910 command_runner.go:130] >     },
	I0717 01:00:56.554027   49910 command_runner.go:130] >     {
	I0717 01:00:56.554033   49910 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 01:00:56.554039   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554044   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 01:00:56.554050   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554053   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554062   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 01:00:56.554078   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 01:00:56.554083   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554087   49910 command_runner.go:130] >       "size": "31470524",
	I0717 01:00:56.554093   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.554097   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554102   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554106   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554109   49910 command_runner.go:130] >     },
	I0717 01:00:56.554115   49910 command_runner.go:130] >     {
	I0717 01:00:56.554121   49910 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 01:00:56.554127   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554132   49910 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 01:00:56.554137   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554142   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554150   49910 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 01:00:56.554159   49910 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 01:00:56.554165   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554169   49910 command_runner.go:130] >       "size": "61245718",
	I0717 01:00:56.554175   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.554179   49910 command_runner.go:130] >       "username": "nonroot",
	I0717 01:00:56.554189   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554196   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554199   49910 command_runner.go:130] >     },
	I0717 01:00:56.554203   49910 command_runner.go:130] >     {
	I0717 01:00:56.554209   49910 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 01:00:56.554214   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554219   49910 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 01:00:56.554224   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554228   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554237   49910 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 01:00:56.554245   49910 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 01:00:56.554255   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554261   49910 command_runner.go:130] >       "size": "150779692",
	I0717 01:00:56.554265   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.554271   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.554274   49910 command_runner.go:130] >       },
	I0717 01:00:56.554279   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554283   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554289   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554293   49910 command_runner.go:130] >     },
	I0717 01:00:56.554298   49910 command_runner.go:130] >     {
	I0717 01:00:56.554304   49910 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 01:00:56.554310   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554315   49910 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 01:00:56.554320   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554324   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554333   49910 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 01:00:56.554342   49910 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 01:00:56.554347   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554352   49910 command_runner.go:130] >       "size": "117609954",
	I0717 01:00:56.554357   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.554361   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.554366   49910 command_runner.go:130] >       },
	I0717 01:00:56.554370   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554373   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554379   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554388   49910 command_runner.go:130] >     },
	I0717 01:00:56.554394   49910 command_runner.go:130] >     {
	I0717 01:00:56.554400   49910 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 01:00:56.554406   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554411   49910 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 01:00:56.554424   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554430   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554449   49910 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 01:00:56.554458   49910 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 01:00:56.554462   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554466   49910 command_runner.go:130] >       "size": "112194888",
	I0717 01:00:56.554471   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.554475   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.554480   49910 command_runner.go:130] >       },
	I0717 01:00:56.554484   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554488   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554491   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554494   49910 command_runner.go:130] >     },
	I0717 01:00:56.554497   49910 command_runner.go:130] >     {
	I0717 01:00:56.554502   49910 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 01:00:56.554506   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554510   49910 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 01:00:56.554513   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554517   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554523   49910 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 01:00:56.554529   49910 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 01:00:56.554532   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554536   49910 command_runner.go:130] >       "size": "85953433",
	I0717 01:00:56.554540   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.554543   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554546   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554550   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554553   49910 command_runner.go:130] >     },
	I0717 01:00:56.554558   49910 command_runner.go:130] >     {
	I0717 01:00:56.554564   49910 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 01:00:56.554570   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554579   49910 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 01:00:56.554585   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554588   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554598   49910 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 01:00:56.554607   49910 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 01:00:56.554612   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554616   49910 command_runner.go:130] >       "size": "63051080",
	I0717 01:00:56.554622   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.554626   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.554632   49910 command_runner.go:130] >       },
	I0717 01:00:56.554636   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554642   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554646   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554649   49910 command_runner.go:130] >     },
	I0717 01:00:56.554654   49910 command_runner.go:130] >     {
	I0717 01:00:56.554660   49910 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 01:00:56.554666   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554671   49910 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 01:00:56.554676   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554681   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554688   49910 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 01:00:56.554697   49910 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 01:00:56.554702   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554706   49910 command_runner.go:130] >       "size": "750414",
	I0717 01:00:56.554711   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.554716   49910 command_runner.go:130] >         "value": "65535"
	I0717 01:00:56.554721   49910 command_runner.go:130] >       },
	I0717 01:00:56.554725   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554731   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554735   49910 command_runner.go:130] >       "pinned": true
	I0717 01:00:56.554738   49910 command_runner.go:130] >     }
	I0717 01:00:56.554741   49910 command_runner.go:130] >   ]
	I0717 01:00:56.554746   49910 command_runner.go:130] > }
	I0717 01:00:56.555785   49910 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:00:56.555802   49910 crio.go:433] Images already preloaded, skipping extraction
	I0717 01:00:56.555858   49910 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:00:56.600246   49910 command_runner.go:130] > {
	I0717 01:00:56.600271   49910 command_runner.go:130] >   "images": [
	I0717 01:00:56.600277   49910 command_runner.go:130] >     {
	I0717 01:00:56.600289   49910 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 01:00:56.600296   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.600307   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 01:00:56.600311   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600316   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.600324   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 01:00:56.600334   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 01:00:56.600337   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600342   49910 command_runner.go:130] >       "size": "65908273",
	I0717 01:00:56.600346   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.600350   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.600357   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.600367   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.600374   49910 command_runner.go:130] >     },
	I0717 01:00:56.600383   49910 command_runner.go:130] >     {
	I0717 01:00:56.600392   49910 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0717 01:00:56.600403   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.600409   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0717 01:00:56.600412   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600417   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.600425   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0717 01:00:56.600435   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0717 01:00:56.600440   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600444   49910 command_runner.go:130] >       "size": "87165492",
	I0717 01:00:56.600450   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.600463   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.600473   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.600482   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.600491   49910 command_runner.go:130] >     },
	I0717 01:00:56.600499   49910 command_runner.go:130] >     {
	I0717 01:00:56.600512   49910 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 01:00:56.600521   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.600537   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 01:00:56.600546   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600570   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.600586   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 01:00:56.600600   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 01:00:56.600608   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600617   49910 command_runner.go:130] >       "size": "1363676",
	I0717 01:00:56.600627   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.600637   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.600647   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.600656   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.600665   49910 command_runner.go:130] >     },
	I0717 01:00:56.600670   49910 command_runner.go:130] >     {
	I0717 01:00:56.600683   49910 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 01:00:56.600692   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.600700   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 01:00:56.600705   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600714   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.600730   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 01:00:56.600753   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 01:00:56.600763   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600769   49910 command_runner.go:130] >       "size": "31470524",
	I0717 01:00:56.600775   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.600783   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.600787   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.600796   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.600805   49910 command_runner.go:130] >     },
	I0717 01:00:56.600813   49910 command_runner.go:130] >     {
	I0717 01:00:56.600826   49910 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 01:00:56.600836   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.600847   49910 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 01:00:56.600855   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600862   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.600875   49910 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 01:00:56.600890   49910 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 01:00:56.600899   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600912   49910 command_runner.go:130] >       "size": "61245718",
	I0717 01:00:56.600921   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.600931   49910 command_runner.go:130] >       "username": "nonroot",
	I0717 01:00:56.600940   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.600948   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.600955   49910 command_runner.go:130] >     },
	I0717 01:00:56.600959   49910 command_runner.go:130] >     {
	I0717 01:00:56.600968   49910 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 01:00:56.600978   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.600989   49910 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 01:00:56.600997   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601006   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.601021   49910 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 01:00:56.601034   49910 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 01:00:56.601040   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601044   49910 command_runner.go:130] >       "size": "150779692",
	I0717 01:00:56.601050   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.601059   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.601068   49910 command_runner.go:130] >       },
	I0717 01:00:56.601077   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.601086   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.601100   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.601108   49910 command_runner.go:130] >     },
	I0717 01:00:56.601116   49910 command_runner.go:130] >     {
	I0717 01:00:56.601124   49910 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 01:00:56.601129   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.601137   49910 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 01:00:56.601146   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601155   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.601170   49910 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 01:00:56.601184   49910 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 01:00:56.601192   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601201   49910 command_runner.go:130] >       "size": "117609954",
	I0717 01:00:56.601208   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.601213   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.601219   49910 command_runner.go:130] >       },
	I0717 01:00:56.601235   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.601244   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.601254   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.601261   49910 command_runner.go:130] >     },
	I0717 01:00:56.601266   49910 command_runner.go:130] >     {
	I0717 01:00:56.601279   49910 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 01:00:56.601289   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.601297   49910 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 01:00:56.601301   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601308   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.601339   49910 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 01:00:56.601354   49910 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 01:00:56.601363   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601373   49910 command_runner.go:130] >       "size": "112194888",
	I0717 01:00:56.601380   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.601384   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.601392   49910 command_runner.go:130] >       },
	I0717 01:00:56.601400   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.601410   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.601419   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.601426   49910 command_runner.go:130] >     },
	I0717 01:00:56.601434   49910 command_runner.go:130] >     {
	I0717 01:00:56.601447   49910 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 01:00:56.601455   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.601465   49910 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 01:00:56.601472   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601478   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.601492   49910 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 01:00:56.601506   49910 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 01:00:56.601514   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601522   49910 command_runner.go:130] >       "size": "85953433",
	I0717 01:00:56.601531   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.601545   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.601552   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.601556   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.601565   49910 command_runner.go:130] >     },
	I0717 01:00:56.601575   49910 command_runner.go:130] >     {
	I0717 01:00:56.601588   49910 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 01:00:56.601597   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.601608   49910 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 01:00:56.601616   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601624   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.601637   49910 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 01:00:56.601647   49910 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 01:00:56.601655   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601663   49910 command_runner.go:130] >       "size": "63051080",
	I0717 01:00:56.601671   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.601681   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.601689   49910 command_runner.go:130] >       },
	I0717 01:00:56.601698   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.601707   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.601715   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.601726   49910 command_runner.go:130] >     },
	I0717 01:00:56.601734   49910 command_runner.go:130] >     {
	I0717 01:00:56.601745   49910 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 01:00:56.601754   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.601764   49910 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 01:00:56.601772   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601782   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.601796   49910 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 01:00:56.601808   49910 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 01:00:56.601814   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601820   49910 command_runner.go:130] >       "size": "750414",
	I0717 01:00:56.601829   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.601840   49910 command_runner.go:130] >         "value": "65535"
	I0717 01:00:56.601848   49910 command_runner.go:130] >       },
	I0717 01:00:56.601857   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.601865   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.601874   49910 command_runner.go:130] >       "pinned": true
	I0717 01:00:56.601881   49910 command_runner.go:130] >     }
	I0717 01:00:56.601887   49910 command_runner.go:130] >   ]
	I0717 01:00:56.601893   49910 command_runner.go:130] > }
	I0717 01:00:56.602054   49910 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:00:56.602066   49910 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:00:56.602073   49910 kubeadm.go:934] updating node { 192.168.39.36 8443 v1.30.2 crio true true} ...
	I0717 01:00:56.602323   49910 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-905682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-905682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:00:56.602417   49910 ssh_runner.go:195] Run: crio config
	I0717 01:00:56.642993   49910 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 01:00:56.643024   49910 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 01:00:56.643033   49910 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 01:00:56.643037   49910 command_runner.go:130] > #
	I0717 01:00:56.643047   49910 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 01:00:56.643053   49910 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 01:00:56.643059   49910 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 01:00:56.643065   49910 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 01:00:56.643069   49910 command_runner.go:130] > # reload'.
	I0717 01:00:56.643074   49910 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 01:00:56.643081   49910 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 01:00:56.643101   49910 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 01:00:56.643113   49910 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 01:00:56.643122   49910 command_runner.go:130] > [crio]
	I0717 01:00:56.643131   49910 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 01:00:56.643142   49910 command_runner.go:130] > # containers images, in this directory.
	I0717 01:00:56.643153   49910 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 01:00:56.643172   49910 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 01:00:56.643183   49910 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 01:00:56.643195   49910 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0717 01:00:56.643257   49910 command_runner.go:130] > # imagestore = ""
	I0717 01:00:56.643277   49910 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 01:00:56.643288   49910 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 01:00:56.643385   49910 command_runner.go:130] > storage_driver = "overlay"
	I0717 01:00:56.643399   49910 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 01:00:56.643408   49910 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 01:00:56.643415   49910 command_runner.go:130] > storage_option = [
	I0717 01:00:56.643513   49910 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 01:00:56.643567   49910 command_runner.go:130] > ]
	I0717 01:00:56.643597   49910 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 01:00:56.643610   49910 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 01:00:56.643871   49910 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 01:00:56.643886   49910 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 01:00:56.643895   49910 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 01:00:56.643901   49910 command_runner.go:130] > # always happen on a node reboot
	I0717 01:00:56.644130   49910 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 01:00:56.644155   49910 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 01:00:56.644165   49910 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 01:00:56.644174   49910 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 01:00:56.644353   49910 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0717 01:00:56.644371   49910 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 01:00:56.644386   49910 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 01:00:56.644630   49910 command_runner.go:130] > # internal_wipe = true
	I0717 01:00:56.644647   49910 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0717 01:00:56.644656   49910 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0717 01:00:56.644978   49910 command_runner.go:130] > # internal_repair = false
	I0717 01:00:56.644989   49910 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 01:00:56.644999   49910 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 01:00:56.645008   49910 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 01:00:56.645199   49910 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 01:00:56.645214   49910 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 01:00:56.645220   49910 command_runner.go:130] > [crio.api]
	I0717 01:00:56.645231   49910 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 01:00:56.645451   49910 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 01:00:56.645465   49910 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 01:00:56.645764   49910 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 01:00:56.645782   49910 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 01:00:56.645791   49910 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 01:00:56.645986   49910 command_runner.go:130] > # stream_port = "0"
	I0717 01:00:56.645996   49910 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 01:00:56.646261   49910 command_runner.go:130] > # stream_enable_tls = false
	I0717 01:00:56.646269   49910 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 01:00:56.646485   49910 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 01:00:56.646499   49910 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 01:00:56.646510   49910 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 01:00:56.646519   49910 command_runner.go:130] > # minutes.
	I0717 01:00:56.646642   49910 command_runner.go:130] > # stream_tls_cert = ""
	I0717 01:00:56.646653   49910 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 01:00:56.646659   49910 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 01:00:56.646907   49910 command_runner.go:130] > # stream_tls_key = ""
	I0717 01:00:56.646916   49910 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 01:00:56.646921   49910 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 01:00:56.646942   49910 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 01:00:56.647074   49910 command_runner.go:130] > # stream_tls_ca = ""
	I0717 01:00:56.647101   49910 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 01:00:56.647219   49910 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 01:00:56.647234   49910 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 01:00:56.647367   49910 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 01:00:56.647383   49910 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 01:00:56.647391   49910 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 01:00:56.647399   49910 command_runner.go:130] > [crio.runtime]
	I0717 01:00:56.647411   49910 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 01:00:56.647423   49910 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 01:00:56.647430   49910 command_runner.go:130] > # "nofile=1024:2048"
	I0717 01:00:56.647444   49910 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 01:00:56.647487   49910 command_runner.go:130] > # default_ulimits = [
	I0717 01:00:56.647621   49910 command_runner.go:130] > # ]
	I0717 01:00:56.647637   49910 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 01:00:56.647854   49910 command_runner.go:130] > # no_pivot = false
	I0717 01:00:56.647869   49910 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 01:00:56.647879   49910 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 01:00:56.647889   49910 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 01:00:56.647902   49910 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 01:00:56.647909   49910 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 01:00:56.647923   49910 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 01:00:56.647934   49910 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 01:00:56.647945   49910 command_runner.go:130] > # Cgroup setting for conmon
	I0717 01:00:56.647959   49910 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 01:00:56.647969   49910 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 01:00:56.647982   49910 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 01:00:56.647993   49910 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 01:00:56.648004   49910 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 01:00:56.648014   49910 command_runner.go:130] > conmon_env = [
	I0717 01:00:56.648023   49910 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 01:00:56.648032   49910 command_runner.go:130] > ]
	I0717 01:00:56.648040   49910 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 01:00:56.648051   49910 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 01:00:56.648064   49910 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 01:00:56.648079   49910 command_runner.go:130] > # default_env = [
	I0717 01:00:56.648092   49910 command_runner.go:130] > # ]
	I0717 01:00:56.648101   49910 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 01:00:56.648125   49910 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0717 01:00:56.648132   49910 command_runner.go:130] > # selinux = false
	I0717 01:00:56.648138   49910 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 01:00:56.648144   49910 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 01:00:56.648154   49910 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 01:00:56.648161   49910 command_runner.go:130] > # seccomp_profile = ""
	I0717 01:00:56.648169   49910 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 01:00:56.648181   49910 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 01:00:56.648195   49910 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 01:00:56.648206   49910 command_runner.go:130] > # which might increase security.
	I0717 01:00:56.648218   49910 command_runner.go:130] > # This option is currently deprecated,
	I0717 01:00:56.648228   49910 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0717 01:00:56.648244   49910 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 01:00:56.648254   49910 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 01:00:56.648263   49910 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 01:00:56.648277   49910 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 01:00:56.648290   49910 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 01:00:56.648300   49910 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:00:56.648310   49910 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 01:00:56.648323   49910 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 01:00:56.648333   49910 command_runner.go:130] > # the cgroup blockio controller.
	I0717 01:00:56.648340   49910 command_runner.go:130] > # blockio_config_file = ""
	I0717 01:00:56.648353   49910 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0717 01:00:56.648362   49910 command_runner.go:130] > # blockio parameters.
	I0717 01:00:56.648370   49910 command_runner.go:130] > # blockio_reload = false
	I0717 01:00:56.648382   49910 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 01:00:56.648389   49910 command_runner.go:130] > # irqbalance daemon.
	I0717 01:00:56.648394   49910 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 01:00:56.648405   49910 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0717 01:00:56.648418   49910 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0717 01:00:56.648431   49910 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0717 01:00:56.648445   49910 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0717 01:00:56.648457   49910 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 01:00:56.648470   49910 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:00:56.648479   49910 command_runner.go:130] > # rdt_config_file = ""
	I0717 01:00:56.648488   49910 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 01:00:56.648504   49910 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 01:00:56.648577   49910 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 01:00:56.648590   49910 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 01:00:56.648601   49910 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 01:00:56.648613   49910 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 01:00:56.648622   49910 command_runner.go:130] > # will be added.
	I0717 01:00:56.648629   49910 command_runner.go:130] > # default_capabilities = [
	I0717 01:00:56.648638   49910 command_runner.go:130] > # 	"CHOWN",
	I0717 01:00:56.648643   49910 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 01:00:56.648649   49910 command_runner.go:130] > # 	"FSETID",
	I0717 01:00:56.648653   49910 command_runner.go:130] > # 	"FOWNER",
	I0717 01:00:56.648656   49910 command_runner.go:130] > # 	"SETGID",
	I0717 01:00:56.648660   49910 command_runner.go:130] > # 	"SETUID",
	I0717 01:00:56.648663   49910 command_runner.go:130] > # 	"SETPCAP",
	I0717 01:00:56.648667   49910 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 01:00:56.648671   49910 command_runner.go:130] > # 	"KILL",
	I0717 01:00:56.648674   49910 command_runner.go:130] > # ]
	I0717 01:00:56.648681   49910 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 01:00:56.648690   49910 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 01:00:56.648694   49910 command_runner.go:130] > # add_inheritable_capabilities = false
	I0717 01:00:56.648700   49910 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 01:00:56.648707   49910 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 01:00:56.648711   49910 command_runner.go:130] > default_sysctls = [
	I0717 01:00:56.648716   49910 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0717 01:00:56.648719   49910 command_runner.go:130] > ]
	I0717 01:00:56.648723   49910 command_runner.go:130] > # List of devices on the host that a
	I0717 01:00:56.648730   49910 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 01:00:56.648734   49910 command_runner.go:130] > # allowed_devices = [
	I0717 01:00:56.648738   49910 command_runner.go:130] > # 	"/dev/fuse",
	I0717 01:00:56.648741   49910 command_runner.go:130] > # ]
	I0717 01:00:56.648745   49910 command_runner.go:130] > # List of additional devices. specified as
	I0717 01:00:56.648752   49910 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 01:00:56.648759   49910 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 01:00:56.648764   49910 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 01:00:56.648770   49910 command_runner.go:130] > # additional_devices = [
	I0717 01:00:56.648773   49910 command_runner.go:130] > # ]
	I0717 01:00:56.648785   49910 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 01:00:56.648791   49910 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 01:00:56.648797   49910 command_runner.go:130] > # 	"/etc/cdi",
	I0717 01:00:56.648803   49910 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 01:00:56.648806   49910 command_runner.go:130] > # ]
	I0717 01:00:56.648812   49910 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 01:00:56.648820   49910 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 01:00:56.648824   49910 command_runner.go:130] > # Defaults to false.
	I0717 01:00:56.648828   49910 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 01:00:56.648837   49910 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 01:00:56.648844   49910 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 01:00:56.648848   49910 command_runner.go:130] > # hooks_dir = [
	I0717 01:00:56.648941   49910 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 01:00:56.648952   49910 command_runner.go:130] > # ]
	I0717 01:00:56.648961   49910 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 01:00:56.648971   49910 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 01:00:56.648979   49910 command_runner.go:130] > # its default mounts from the following two files:
	I0717 01:00:56.648987   49910 command_runner.go:130] > #
	I0717 01:00:56.648997   49910 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 01:00:56.649008   49910 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 01:00:56.649017   49910 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 01:00:56.649026   49910 command_runner.go:130] > #
	I0717 01:00:56.649041   49910 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 01:00:56.649054   49910 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 01:00:56.649067   49910 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 01:00:56.649077   49910 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 01:00:56.649083   49910 command_runner.go:130] > #
	I0717 01:00:56.649094   49910 command_runner.go:130] > # default_mounts_file = ""
	I0717 01:00:56.649104   49910 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 01:00:56.649110   49910 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 01:00:56.649114   49910 command_runner.go:130] > pids_limit = 1024
	I0717 01:00:56.649120   49910 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0717 01:00:56.649127   49910 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 01:00:56.649139   49910 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 01:00:56.649154   49910 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 01:00:56.649163   49910 command_runner.go:130] > # log_size_max = -1
	I0717 01:00:56.649180   49910 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 01:00:56.649190   49910 command_runner.go:130] > # log_to_journald = false
	I0717 01:00:56.649202   49910 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 01:00:56.649210   49910 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 01:00:56.649219   49910 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 01:00:56.649229   49910 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 01:00:56.649238   49910 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 01:00:56.649248   49910 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 01:00:56.649256   49910 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 01:00:56.649263   49910 command_runner.go:130] > # read_only = false
	I0717 01:00:56.649269   49910 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 01:00:56.649275   49910 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 01:00:56.649279   49910 command_runner.go:130] > # live configuration reload.
	I0717 01:00:56.649283   49910 command_runner.go:130] > # log_level = "info"
	I0717 01:00:56.649288   49910 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 01:00:56.649299   49910 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:00:56.649304   49910 command_runner.go:130] > # log_filter = ""
	I0717 01:00:56.649310   49910 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 01:00:56.649316   49910 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 01:00:56.649321   49910 command_runner.go:130] > # separated by comma.
	I0717 01:00:56.649328   49910 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:00:56.649335   49910 command_runner.go:130] > # uid_mappings = ""
	I0717 01:00:56.649341   49910 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 01:00:56.649349   49910 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 01:00:56.649353   49910 command_runner.go:130] > # separated by comma.
	I0717 01:00:56.649362   49910 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:00:56.649366   49910 command_runner.go:130] > # gid_mappings = ""
	I0717 01:00:56.649372   49910 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 01:00:56.649380   49910 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 01:00:56.649385   49910 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 01:00:56.649394   49910 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:00:56.649398   49910 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 01:00:56.649405   49910 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 01:00:56.649411   49910 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 01:00:56.649417   49910 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 01:00:56.649424   49910 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:00:56.649435   49910 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 01:00:56.649443   49910 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 01:00:56.649449   49910 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 01:00:56.649457   49910 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 01:00:56.649461   49910 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 01:00:56.649466   49910 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 01:00:56.649473   49910 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 01:00:56.649478   49910 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 01:00:56.649484   49910 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 01:00:56.649488   49910 command_runner.go:130] > drop_infra_ctr = false
	I0717 01:00:56.649496   49910 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 01:00:56.649501   49910 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 01:00:56.649510   49910 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 01:00:56.649515   49910 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 01:00:56.649523   49910 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0717 01:00:56.649528   49910 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0717 01:00:56.649535   49910 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0717 01:00:56.649540   49910 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0717 01:00:56.649546   49910 command_runner.go:130] > # shared_cpuset = ""
	I0717 01:00:56.649551   49910 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 01:00:56.649556   49910 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 01:00:56.649560   49910 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 01:00:56.649566   49910 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 01:00:56.649571   49910 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 01:00:56.649576   49910 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0717 01:00:56.649587   49910 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0717 01:00:56.649592   49910 command_runner.go:130] > # enable_criu_support = false
	I0717 01:00:56.649597   49910 command_runner.go:130] > # Enable/disable the generation of the container,
	I0717 01:00:56.649603   49910 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0717 01:00:56.649608   49910 command_runner.go:130] > # enable_pod_events = false
	I0717 01:00:56.649613   49910 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 01:00:56.649625   49910 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0717 01:00:56.649631   49910 command_runner.go:130] > # default_runtime = "runc"
	I0717 01:00:56.649636   49910 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 01:00:56.649644   49910 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 01:00:56.649658   49910 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 01:00:56.649665   49910 command_runner.go:130] > # creation as a file is not desired either.
	I0717 01:00:56.649673   49910 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 01:00:56.649679   49910 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 01:00:56.649683   49910 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 01:00:56.649686   49910 command_runner.go:130] > # ]
	I0717 01:00:56.649692   49910 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 01:00:56.649700   49910 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 01:00:56.649706   49910 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0717 01:00:56.649712   49910 command_runner.go:130] > # Each entry in the table should follow the format:
	I0717 01:00:56.649715   49910 command_runner.go:130] > #
	I0717 01:00:56.649720   49910 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0717 01:00:56.649724   49910 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0717 01:00:56.649774   49910 command_runner.go:130] > # runtime_type = "oci"
	I0717 01:00:56.649781   49910 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0717 01:00:56.649785   49910 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0717 01:00:56.649789   49910 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0717 01:00:56.649793   49910 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0717 01:00:56.649797   49910 command_runner.go:130] > # monitor_env = []
	I0717 01:00:56.649802   49910 command_runner.go:130] > # privileged_without_host_devices = false
	I0717 01:00:56.649808   49910 command_runner.go:130] > # allowed_annotations = []
	I0717 01:00:56.649813   49910 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0717 01:00:56.649818   49910 command_runner.go:130] > # Where:
	I0717 01:00:56.649825   49910 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0717 01:00:56.649835   49910 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0717 01:00:56.649843   49910 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 01:00:56.649849   49910 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 01:00:56.649854   49910 command_runner.go:130] > #   in $PATH.
	I0717 01:00:56.649860   49910 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0717 01:00:56.649867   49910 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 01:00:56.649874   49910 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0717 01:00:56.649882   49910 command_runner.go:130] > #   state.
	I0717 01:00:56.649891   49910 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 01:00:56.649902   49910 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0717 01:00:56.649912   49910 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 01:00:56.649924   49910 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 01:00:56.649941   49910 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 01:00:56.649950   49910 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 01:00:56.649954   49910 command_runner.go:130] > #   The currently recognized values are:
	I0717 01:00:56.649963   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 01:00:56.649970   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 01:00:56.649978   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 01:00:56.649983   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 01:00:56.649999   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 01:00:56.650013   49910 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 01:00:56.650025   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0717 01:00:56.650037   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0717 01:00:56.650048   49910 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 01:00:56.650055   49910 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0717 01:00:56.650059   49910 command_runner.go:130] > #   deprecated option "conmon".
	I0717 01:00:56.650068   49910 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0717 01:00:56.650073   49910 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0717 01:00:56.650081   49910 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0717 01:00:56.650088   49910 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 01:00:56.650098   49910 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0717 01:00:56.650109   49910 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0717 01:00:56.650120   49910 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0717 01:00:56.650132   49910 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0717 01:00:56.650137   49910 command_runner.go:130] > #
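The runtime-handler notes above say that when runtime_path is omitted, the handler name itself is looked up as an executable in $PATH. A minimal Go sketch of that fallback, standard library only; the "crun" handler below is a hypothetical example, not a runtime configured on this node:

package main

import (
	"fmt"
	"os/exec"
)

// resolveRuntime mirrors the documented fallback: prefer an explicit
// runtime_path, otherwise resolve the handler name via $PATH.
func resolveRuntime(handler, runtimePath string) (string, error) {
	if runtimePath != "" {
		return runtimePath, nil
	}
	return exec.LookPath(handler)
}

func main() {
	for _, h := range []struct{ name, path string }{
		{"runc", "/usr/bin/runc"}, // explicit path, as in the [crio.runtime.runtimes.runc] block below
		{"crun", ""},              // hypothetical handler resolved from $PATH
	} {
		p, err := resolveRuntime(h.name, h.path)
		fmt.Println(h.name, "->", p, err)
	}
}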
	I0717 01:00:56.650145   49910 command_runner.go:130] > # Using the seccomp notifier feature:
	I0717 01:00:56.650153   49910 command_runner.go:130] > #
	I0717 01:00:56.650162   49910 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0717 01:00:56.650171   49910 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0717 01:00:56.650174   49910 command_runner.go:130] > #
	I0717 01:00:56.650180   49910 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0717 01:00:56.650189   49910 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0717 01:00:56.650194   49910 command_runner.go:130] > #
	I0717 01:00:56.650207   49910 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0717 01:00:56.650212   49910 command_runner.go:130] > # feature.
	I0717 01:00:56.650220   49910 command_runner.go:130] > #
	I0717 01:00:56.650230   49910 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0717 01:00:56.650246   49910 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0717 01:00:56.650263   49910 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0717 01:00:56.650274   49910 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0717 01:00:56.650282   49910 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0717 01:00:56.650287   49910 command_runner.go:130] > #
	I0717 01:00:56.650297   49910 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0717 01:00:56.650310   49910 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0717 01:00:56.650314   49910 command_runner.go:130] > #
	I0717 01:00:56.650325   49910 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0717 01:00:56.650337   49910 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0717 01:00:56.650345   49910 command_runner.go:130] > #
	I0717 01:00:56.650355   49910 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0717 01:00:56.650367   49910 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0717 01:00:56.650375   49910 command_runner.go:130] > # limitation.
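To make the two requirements above concrete (the notifier annotation on the Pod sandbox plus restartPolicy "Never"), here is a small Go sketch that prints a minimal pod manifest with those fields set. The pod name and image are placeholders, and it assumes the cluster's runtime handler already lists the annotation in allowed_annotations:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Minimal pod manifest (as a plain map) showing the notifier annotation
	// and restartPolicy Never described in the comments above.
	pod := map[string]any{
		"apiVersion": "v1",
		"kind":       "Pod",
		"metadata": map[string]any{
			"name": "seccomp-notifier-demo", // placeholder name
			"annotations": map[string]string{
				"io.kubernetes.cri-o.seccompNotifierAction": "stop",
			},
		},
		"spec": map[string]any{
			"restartPolicy": "Never",
			"containers": []map[string]any{
				{"name": "app", "image": "busybox"}, // placeholder image
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}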
	I0717 01:00:56.650382   49910 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 01:00:56.650386   49910 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 01:00:56.650395   49910 command_runner.go:130] > runtime_type = "oci"
	I0717 01:00:56.650405   49910 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 01:00:56.650411   49910 command_runner.go:130] > runtime_config_path = ""
	I0717 01:00:56.650422   49910 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0717 01:00:56.650430   49910 command_runner.go:130] > monitor_cgroup = "pod"
	I0717 01:00:56.650442   49910 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 01:00:56.650451   49910 command_runner.go:130] > monitor_env = [
	I0717 01:00:56.650463   49910 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 01:00:56.650470   49910 command_runner.go:130] > ]
	I0717 01:00:56.650475   49910 command_runner.go:130] > privileged_without_host_devices = false
	I0717 01:00:56.650483   49910 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 01:00:56.650494   49910 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 01:00:56.650505   49910 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 01:00:56.650520   49910 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0717 01:00:56.650535   49910 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 01:00:56.650546   49910 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 01:00:56.650562   49910 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 01:00:56.650573   49910 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 01:00:56.650581   49910 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 01:00:56.650595   49910 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 01:00:56.650602   49910 command_runner.go:130] > # Example:
	I0717 01:00:56.650616   49910 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 01:00:56.650624   49910 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 01:00:56.650632   49910 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 01:00:56.650640   49910 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 01:00:56.650645   49910 command_runner.go:130] > # cpuset = 0
	I0717 01:00:56.650651   49910 command_runner.go:130] > # cpushares = "0-1"
	I0717 01:00:56.650655   49910 command_runner.go:130] > # Where:
	I0717 01:00:56.650659   49910 command_runner.go:130] > # The workload name is workload-type.
	I0717 01:00:56.650667   49910 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 01:00:56.650675   49910 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 01:00:56.650684   49910 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 01:00:56.650697   49910 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 01:00:56.650706   49910 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 01:00:56.650713   49910 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0717 01:00:56.650722   49910 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0717 01:00:56.650729   49910 command_runner.go:130] > # Default value is set to true
	I0717 01:00:56.650736   49910 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0717 01:00:56.650742   49910 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0717 01:00:56.650746   49910 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0717 01:00:56.650750   49910 command_runner.go:130] > # Default value is set to 'false'
	I0717 01:00:56.650757   49910 command_runner.go:130] > # disable_hostport_mapping = false
	I0717 01:00:56.650767   49910 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 01:00:56.650775   49910 command_runner.go:130] > #
	I0717 01:00:56.650784   49910 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 01:00:56.650796   49910 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 01:00:56.650805   49910 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 01:00:56.650818   49910 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 01:00:56.650827   49910 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 01:00:56.650831   49910 command_runner.go:130] > [crio.image]
	I0717 01:00:56.650846   49910 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 01:00:56.650857   49910 command_runner.go:130] > # default_transport = "docker://"
	I0717 01:00:56.650867   49910 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 01:00:56.650880   49910 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 01:00:56.650889   49910 command_runner.go:130] > # global_auth_file = ""
	I0717 01:00:56.650897   49910 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 01:00:56.650907   49910 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:00:56.650924   49910 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0717 01:00:56.650933   49910 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 01:00:56.650941   49910 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 01:00:56.650953   49910 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:00:56.650960   49910 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 01:00:56.650977   49910 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 01:00:56.650989   49910 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 01:00:56.651001   49910 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 01:00:56.651013   49910 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 01:00:56.651021   49910 command_runner.go:130] > # pause_command = "/pause"
	I0717 01:00:56.651027   49910 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0717 01:00:56.651038   49910 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0717 01:00:56.651050   49910 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0717 01:00:56.651062   49910 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0717 01:00:56.651075   49910 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0717 01:00:56.651087   49910 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0717 01:00:56.651096   49910 command_runner.go:130] > # pinned_images = [
	I0717 01:00:56.651101   49910 command_runner.go:130] > # ]
	I0717 01:00:56.651116   49910 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 01:00:56.651124   49910 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 01:00:56.651133   49910 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 01:00:56.651145   49910 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 01:00:56.651154   49910 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 01:00:56.651164   49910 command_runner.go:130] > # signature_policy = ""
	I0717 01:00:56.651173   49910 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0717 01:00:56.651186   49910 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0717 01:00:56.651198   49910 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0717 01:00:56.651224   49910 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0717 01:00:56.651239   49910 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0717 01:00:56.651247   49910 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0717 01:00:56.651259   49910 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 01:00:56.651272   49910 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 01:00:56.651281   49910 command_runner.go:130] > # changing them here.
	I0717 01:00:56.651287   49910 command_runner.go:130] > # insecure_registries = [
	I0717 01:00:56.651295   49910 command_runner.go:130] > # ]
	I0717 01:00:56.651304   49910 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 01:00:56.651317   49910 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 01:00:56.651327   49910 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 01:00:56.651335   49910 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 01:00:56.651342   49910 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 01:00:56.651354   49910 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 01:00:56.651363   49910 command_runner.go:130] > # CNI plugins.
	I0717 01:00:56.651369   49910 command_runner.go:130] > [crio.network]
	I0717 01:00:56.651379   49910 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 01:00:56.651391   49910 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 01:00:56.651399   49910 command_runner.go:130] > # cni_default_network = ""
	I0717 01:00:56.651405   49910 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 01:00:56.651415   49910 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 01:00:56.651427   49910 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 01:00:56.651433   49910 command_runner.go:130] > # plugin_dirs = [
	I0717 01:00:56.651442   49910 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 01:00:56.651454   49910 command_runner.go:130] > # ]
	I0717 01:00:56.651465   49910 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 01:00:56.651474   49910 command_runner.go:130] > [crio.metrics]
	I0717 01:00:56.651481   49910 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 01:00:56.651490   49910 command_runner.go:130] > enable_metrics = true
	I0717 01:00:56.651495   49910 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 01:00:56.651502   49910 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 01:00:56.651511   49910 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0717 01:00:56.651524   49910 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 01:00:56.651537   49910 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 01:00:56.651546   49910 command_runner.go:130] > # metrics_collectors = [
	I0717 01:00:56.651554   49910 command_runner.go:130] > # 	"operations",
	I0717 01:00:56.651564   49910 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 01:00:56.651571   49910 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 01:00:56.651581   49910 command_runner.go:130] > # 	"operations_errors",
	I0717 01:00:56.651587   49910 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 01:00:56.651594   49910 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 01:00:56.651599   49910 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 01:00:56.651609   49910 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 01:00:56.651617   49910 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 01:00:56.651623   49910 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 01:00:56.651638   49910 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 01:00:56.651648   49910 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0717 01:00:56.651657   49910 command_runner.go:130] > # 	"containers_oom_total",
	I0717 01:00:56.651664   49910 command_runner.go:130] > # 	"containers_oom",
	I0717 01:00:56.651672   49910 command_runner.go:130] > # 	"processes_defunct",
	I0717 01:00:56.651679   49910 command_runner.go:130] > # 	"operations_total",
	I0717 01:00:56.651687   49910 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 01:00:56.651691   49910 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 01:00:56.651700   49910 command_runner.go:130] > # 	"operations_errors_total",
	I0717 01:00:56.651707   49910 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 01:00:56.651718   49910 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 01:00:56.651724   49910 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 01:00:56.651733   49910 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 01:00:56.651740   49910 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 01:00:56.651750   49910 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 01:00:56.651759   49910 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0717 01:00:56.651768   49910 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0717 01:00:56.651773   49910 command_runner.go:130] > # ]
	I0717 01:00:56.651783   49910 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 01:00:56.651789   49910 command_runner.go:130] > # metrics_port = 9090
	I0717 01:00:56.651794   49910 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 01:00:56.651802   49910 command_runner.go:130] > # metrics_socket = ""
	I0717 01:00:56.651814   49910 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 01:00:56.651824   49910 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 01:00:56.651842   49910 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 01:00:56.651859   49910 command_runner.go:130] > # certificate on any modification event.
	I0717 01:00:56.651868   49910 command_runner.go:130] > # metrics_cert = ""
	I0717 01:00:56.651876   49910 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 01:00:56.651887   49910 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 01:00:56.651896   49910 command_runner.go:130] > # metrics_key = ""
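Since enable_metrics is set to true above, the Prometheus endpoint can be scraped directly. A sketch, assuming the commented default metrics_port = 9090 and no metrics_cert/metrics_key (plain HTTP); neither assumption is confirmed by this log:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Default port from the commented metrics_port above; adjust if the
	// running CRI-O uses a different port or a metrics_socket instead.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("scrape failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("got %d bytes of Prometheus metrics\n", len(body))
}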
	I0717 01:00:56.651914   49910 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 01:00:56.651923   49910 command_runner.go:130] > [crio.tracing]
	I0717 01:00:56.651932   49910 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 01:00:56.651941   49910 command_runner.go:130] > # enable_tracing = false
	I0717 01:00:56.651951   49910 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 01:00:56.651961   49910 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 01:00:56.651985   49910 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0717 01:00:56.651996   49910 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 01:00:56.652005   49910 command_runner.go:130] > # CRI-O NRI configuration.
	I0717 01:00:56.652013   49910 command_runner.go:130] > [crio.nri]
	I0717 01:00:56.652021   49910 command_runner.go:130] > # Globally enable or disable NRI.
	I0717 01:00:56.652028   49910 command_runner.go:130] > # enable_nri = false
	I0717 01:00:56.652035   49910 command_runner.go:130] > # NRI socket to listen on.
	I0717 01:00:56.652046   49910 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0717 01:00:56.652056   49910 command_runner.go:130] > # NRI plugin directory to use.
	I0717 01:00:56.652067   49910 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0717 01:00:56.652074   49910 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0717 01:00:56.652084   49910 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0717 01:00:56.652097   49910 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0717 01:00:56.652105   49910 command_runner.go:130] > # nri_disable_connections = false
	I0717 01:00:56.652111   49910 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0717 01:00:56.652116   49910 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0717 01:00:56.652123   49910 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0717 01:00:56.652128   49910 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0717 01:00:56.652137   49910 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 01:00:56.652145   49910 command_runner.go:130] > [crio.stats]
	I0717 01:00:56.652157   49910 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 01:00:56.652169   49910 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 01:00:56.652179   49910 command_runner.go:130] > # stats_collection_period = 0
	I0717 01:00:56.652217   49910 command_runner.go:130] ! time="2024-07-17 01:00:56.600744877Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0717 01:00:56.652233   49910 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 01:00:56.652408   49910 cni.go:84] Creating CNI manager for ""
	I0717 01:00:56.652422   49910 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 01:00:56.652431   49910 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:00:56.652452   49910 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-905682 NodeName:multinode-905682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:00:56.652615   49910 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-905682"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
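minikube renders the kubeadm config shown above from Go templates before copying it to the node (the kubeadm.yaml.new transfer a few lines below). A heavily reduced sketch of that idea; the struct and template here are invented for illustration, only the values are taken from this log:

package main

import (
	"os"
	"text/template"
)

// initCfg is an illustrative stand-in, not minikube's real template data.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	c := initCfg{"192.168.39.36", 8443, "multinode-905682", "10.244.0.0/16"}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, c)
}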
	
	I0717 01:00:56.652672   49910 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:00:56.662674   49910 command_runner.go:130] > kubeadm
	I0717 01:00:56.662696   49910 command_runner.go:130] > kubectl
	I0717 01:00:56.662702   49910 command_runner.go:130] > kubelet
	I0717 01:00:56.662740   49910 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:00:56.662783   49910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:00:56.671753   49910 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0717 01:00:56.687947   49910 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:00:56.704313   49910 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0717 01:00:56.720853   49910 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0717 01:00:56.724527   49910 command_runner.go:130] > 192.168.39.36	control-plane.minikube.internal
	I0717 01:00:56.724604   49910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:00:56.860730   49910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:00:56.876014   49910 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682 for IP: 192.168.39.36
	I0717 01:00:56.876059   49910 certs.go:194] generating shared ca certs ...
	I0717 01:00:56.876097   49910 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:00:56.876423   49910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:00:56.876520   49910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:00:56.876533   49910 certs.go:256] generating profile certs ...
	I0717 01:00:56.876672   49910 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/client.key
	I0717 01:00:56.876751   49910 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/apiserver.key.bbaa5003
	I0717 01:00:56.876797   49910 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/proxy-client.key
	I0717 01:00:56.876812   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 01:00:56.876831   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 01:00:56.876848   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 01:00:56.876864   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 01:00:56.876879   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 01:00:56.876899   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 01:00:56.876917   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 01:00:56.876933   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 01:00:56.876993   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:00:56.877031   49910 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:00:56.877043   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:00:56.877076   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:00:56.877153   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:00:56.877193   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:00:56.877248   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:00:56.877287   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem -> /usr/share/ca-certificates/20068.pem
	I0717 01:00:56.877304   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /usr/share/ca-certificates/200682.pem
	I0717 01:00:56.877320   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:00:56.878208   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:00:56.902519   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:00:56.925183   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:00:56.948288   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:00:56.971027   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:00:56.996074   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:00:57.019717   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:00:57.045330   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:00:57.070124   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:00:57.095222   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:00:57.119257   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:00:57.142108   49910 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:00:57.158132   49910 ssh_runner.go:195] Run: openssl version
	I0717 01:00:57.164070   49910 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0717 01:00:57.164166   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:00:57.174750   49910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:00:57.179075   49910 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:00:57.179095   49910 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:00:57.179137   49910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:00:57.184502   49910 command_runner.go:130] > 51391683
	I0717 01:00:57.184603   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:00:57.193432   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:00:57.203397   49910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:00:57.207341   49910 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:00:57.207426   49910 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:00:57.207456   49910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:00:57.212585   49910 command_runner.go:130] > 3ec20f2e
	I0717 01:00:57.212775   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:00:57.221390   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:00:57.231560   49910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:00:57.236061   49910 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:00:57.236087   49910 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:00:57.236122   49910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:00:57.241689   49910 command_runner.go:130] > b5213941
	I0717 01:00:57.241764   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
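The CA setup above is a two-step pattern: openssl x509 -hash -noout prints the subject hash (51391683, 3ec20f2e, b5213941 here) and a /etc/ssl/certs/<hash>.0 symlink is created to it. A sketch of the same derivation driven from Go via os/exec; it only computes the link name and does not touch /etc/ssl/certs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink computes the OpenSSL subject hash of a CA certificate and
// derives the /etc/ssl/certs/<hash>.0 name the trust store expects.
func subjectHashLink(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	return fmt.Sprintf("/etc/ssl/certs/%s.0", hash), nil
}

func main() {
	// Path taken from the log above; any PEM-encoded certificate works.
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(link, err) // e.g. /etc/ssl/certs/b5213941.0 on this host
}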
	I0717 01:00:57.251294   49910 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:00:57.256083   49910 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:00:57.256108   49910 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0717 01:00:57.256117   49910 command_runner.go:130] > Device: 253,1	Inode: 1057301     Links: 1
	I0717 01:00:57.256127   49910 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 01:00:57.256162   49910 command_runner.go:130] > Access: 2024-07-17 00:54:16.870072117 +0000
	I0717 01:00:57.256179   49910 command_runner.go:130] > Modify: 2024-07-17 00:54:16.870072117 +0000
	I0717 01:00:57.256193   49910 command_runner.go:130] > Change: 2024-07-17 00:54:16.870072117 +0000
	I0717 01:00:57.256200   49910 command_runner.go:130] >  Birth: 2024-07-17 00:54:16.870072117 +0000
	I0717 01:00:57.256253   49910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:00:57.261687   49910 command_runner.go:130] > Certificate will not expire
	I0717 01:00:57.261834   49910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:00:57.267231   49910 command_runner.go:130] > Certificate will not expire
	I0717 01:00:57.267302   49910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:00:57.273037   49910 command_runner.go:130] > Certificate will not expire
	I0717 01:00:57.273314   49910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:00:57.278571   49910 command_runner.go:130] > Certificate will not expire
	I0717 01:00:57.278707   49910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:00:57.284313   49910 command_runner.go:130] > Certificate will not expire
	I0717 01:00:57.284476   49910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:00:57.290138   49910 command_runner.go:130] > Certificate will not expire
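The expiry checks above shell out to openssl x509 -checkend 86400, i.e. "will this certificate expire within the next 24 hours?". The same question can be asked in pure Go with crypto/x509; a sketch using one of the certificate paths from the log, to be run on the node (or over SSH) where the file exists:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file will
// expire within d, matching openssl's -checkend with d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, err)
}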
	I0717 01:00:57.290378   49910 kubeadm.go:392] StartCluster: {Name:multinode-905682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-905682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.142 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:00:57.290482   49910 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:00:57.290529   49910 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:00:57.334763   49910 command_runner.go:130] > 9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40
	I0717 01:00:57.334792   49910 command_runner.go:130] > c3bf51d1de7ff26c7c9aa552da3fe2ffe0724d7803469a79ad74bf4041f2d6ad
	I0717 01:00:57.334801   49910 command_runner.go:130] > b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d
	I0717 01:00:57.334807   49910 command_runner.go:130] > 721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a
	I0717 01:00:57.334813   49910 command_runner.go:130] > 1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514
	I0717 01:00:57.334818   49910 command_runner.go:130] > 6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe
	I0717 01:00:57.334823   49910 command_runner.go:130] > d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16
	I0717 01:00:57.334830   49910 command_runner.go:130] > aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e
	I0717 01:00:57.334850   49910 cri.go:89] found id: "9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40"
	I0717 01:00:57.334861   49910 cri.go:89] found id: "c3bf51d1de7ff26c7c9aa552da3fe2ffe0724d7803469a79ad74bf4041f2d6ad"
	I0717 01:00:57.334866   49910 cri.go:89] found id: "b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d"
	I0717 01:00:57.334871   49910 cri.go:89] found id: "721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a"
	I0717 01:00:57.334875   49910 cri.go:89] found id: "1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514"
	I0717 01:00:57.334879   49910 cri.go:89] found id: "6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe"
	I0717 01:00:57.334884   49910 cri.go:89] found id: "d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16"
	I0717 01:00:57.334890   49910 cri.go:89] found id: "aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e"
	I0717 01:00:57.334893   49910 cri.go:89] found id: ""
	I0717 01:00:57.334932   49910 ssh_runner.go:195] Run: sudo runc list -f json
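The container discovery above is a single crictl invocation filtered by the io.kubernetes.pod.namespace=kube-system label, returning one 64-character ID per line. A sketch of driving the same command from Go; it assumes crictl and passwordless sudo are available on the host where it runs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the command in the log above: ask crictl
// for all container IDs whose pod namespace label is kube-system.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line, as seen above
}

func main() {
	ids, err := listKubeSystemContainers()
	fmt.Println(len(ids), "kube-system containers", err)
}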
	
	
	==> CRI-O <==
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.208134658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721178160208111660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4c4e8f7-08a5-496e-932f-9bf8ecef8f0b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.208613810Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7db69b7-7d8e-479b-9ad6-e115e1663663 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.208690802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7db69b7-7d8e-479b-9ad6-e115e1663663 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.209093782Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df6e185f483edcd114c5b5e1e069a749aa09fae6aea83a64ca5f00aa3aabe122,PodSandboxId:2c42a8a363a16c1561b4f191e85d2f9e4640c3dcdc85c1420c24fb4df0310f1a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721178097788168943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1aa9c30d3b3ddf3ec3a7a6ca5279181734e5ff502c1dce9aaa9a3d4af79779,PodSandboxId:fa902dff8a01f1da5b3fde6de6ac65d4e90d9f8e3c0f911d9abe06cb4b7deb1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721178064307786299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe0882e6b92d6937fa81f77ea5183c441439f5a5a397ef45b6e629d342dd81c,PodSandboxId:0075107f7665f391900a232f69c36f579cb4c0a44ba25b65ef771987bfc97c63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721178064255156031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18
acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9435d9f50926b9082e0fe944b074713267772411923e8525accce36e3a19a1b,PodSandboxId:8b1ebf053e2c9f73ea821ef74c1a72efa09b99dec311f5d57cc9e350a6d9ca40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721178064115713526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]
string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b81e196cebfdae7bd4c4fea9f33fb3032641523486a3af91af989984dc20a83,PodSandboxId:cb53c056124ab490e1a5a107da3c7088a65775cd8e69e9d89689135aebfa2aa0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721178064147281768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:436a07b748e2dfc1ac19af9dd6966cfcf47fe716502cfdb55f2d6958cfe929b5,PodSandboxId:b1359287405ff1a0bcff6a64a87a47e6a90edb07d269f1806e95b7e5e23df21e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721178060352264994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6bab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a493abca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eae15fbe2335cb28bf1bdfe2a4ae0fb76137c57ea797170a221bce21a335c9d,PodSandboxId:e42479eaa869c667fe11416b5f4f1c71cc7d94cc889e2931ca0d51f87edb600e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721178060273640525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838
e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6742398bcf0e145b5c9d5bd3ee8f9a09aab4acee70075dacb8cef41bf0b2f64,PodSandboxId:0f09420e5dbdb83609705535eabbea00df36dbe358988ba512151614e6cefab3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721178060187402452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 839952580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c2a2c7bd9c8f60abe978569281bedfeb073a8aaaaded1ec5bf7db59556b677,PodSandboxId:ca608ede8b04ed625008072c417ba75de623976f4ffbac722578006ff6007dfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721178060167184071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a6fadd9efc4798dd9696ee44a8d4904525114a1b7f68c3f1eb84af01d321b0,PodSandboxId:6f750ea9aba5f5a09faeeb78de83406a0ca1c80f325c37d33e49abb36353ccd0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177743825240928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40,PodSandboxId:80dbc679f84500871a825d3df2b7f343feea793940183b016ec69ead09dfd547,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177695984538433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bf51d1de7ff26c7c9aa552da3fe2ffe0724d7803469a79ad74bf4041f2d6ad,PodSandboxId:90dbf5cb53d63b1007b96ac2f15b3ab5addf7c91c47aee5e35979b408bdf7c86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177695971674238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d,PodSandboxId:f9ff89031ae51ec6fa95a38321345b3aa2bc57bf8751c5088f062689405608ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721177684024521477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a,PodSandboxId:d5505d12e4eea6371d163a87a0a0fad1a36f4638a55faa0ec6bd8670095c9a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177682037117710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514,PodSandboxId:9ab38025fdf530e56aa514af0c177da6084a28291f77921d67bc21075d30978b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721177661075973367,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16,PodSandboxId:6ba4af0d3ccef8a42ebd9e065840321dbe05bbbfdb4264d02d4fb4560fe448fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177660981349473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6b
ab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.container.hash: a493abca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe,PodSandboxId:0537e60ae6cb6e20e256f53a4c96c5849b669bc05f3633b31dcd1dae06faa155,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177661017336925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83995
2580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e,PodSandboxId:7294fa65d3f2282a6e67ad2366363868614dd6aee47e40788668d22f29d60892,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177660961296009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map
[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7db69b7-7d8e-479b-9ad6-e115e1663663 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.252338501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b583d0c8-05cb-4094-bcf9-adafa8800b0d name=/runtime.v1.RuntimeService/Version
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.252512884Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b583d0c8-05cb-4094-bcf9-adafa8800b0d name=/runtime.v1.RuntimeService/Version
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.253749431Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a5dbc5f-8440-47a5-9d54-1af8ce87517c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.254231517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721178160254200203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a5dbc5f-8440-47a5-9d54-1af8ce87517c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.254842971Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27d87c59-1ee7-4509-9036-be53928b4d2e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.254962316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27d87c59-1ee7-4509-9036-be53928b4d2e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.255305437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df6e185f483edcd114c5b5e1e069a749aa09fae6aea83a64ca5f00aa3aabe122,PodSandboxId:2c42a8a363a16c1561b4f191e85d2f9e4640c3dcdc85c1420c24fb4df0310f1a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721178097788168943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1aa9c30d3b3ddf3ec3a7a6ca5279181734e5ff502c1dce9aaa9a3d4af79779,PodSandboxId:fa902dff8a01f1da5b3fde6de6ac65d4e90d9f8e3c0f911d9abe06cb4b7deb1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721178064307786299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe0882e6b92d6937fa81f77ea5183c441439f5a5a397ef45b6e629d342dd81c,PodSandboxId:0075107f7665f391900a232f69c36f579cb4c0a44ba25b65ef771987bfc97c63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721178064255156031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18
acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9435d9f50926b9082e0fe944b074713267772411923e8525accce36e3a19a1b,PodSandboxId:8b1ebf053e2c9f73ea821ef74c1a72efa09b99dec311f5d57cc9e350a6d9ca40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721178064115713526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]
string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b81e196cebfdae7bd4c4fea9f33fb3032641523486a3af91af989984dc20a83,PodSandboxId:cb53c056124ab490e1a5a107da3c7088a65775cd8e69e9d89689135aebfa2aa0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721178064147281768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:436a07b748e2dfc1ac19af9dd6966cfcf47fe716502cfdb55f2d6958cfe929b5,PodSandboxId:b1359287405ff1a0bcff6a64a87a47e6a90edb07d269f1806e95b7e5e23df21e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721178060352264994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6bab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a493abca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eae15fbe2335cb28bf1bdfe2a4ae0fb76137c57ea797170a221bce21a335c9d,PodSandboxId:e42479eaa869c667fe11416b5f4f1c71cc7d94cc889e2931ca0d51f87edb600e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721178060273640525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838
e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6742398bcf0e145b5c9d5bd3ee8f9a09aab4acee70075dacb8cef41bf0b2f64,PodSandboxId:0f09420e5dbdb83609705535eabbea00df36dbe358988ba512151614e6cefab3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721178060187402452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 839952580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c2a2c7bd9c8f60abe978569281bedfeb073a8aaaaded1ec5bf7db59556b677,PodSandboxId:ca608ede8b04ed625008072c417ba75de623976f4ffbac722578006ff6007dfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721178060167184071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a6fadd9efc4798dd9696ee44a8d4904525114a1b7f68c3f1eb84af01d321b0,PodSandboxId:6f750ea9aba5f5a09faeeb78de83406a0ca1c80f325c37d33e49abb36353ccd0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177743825240928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40,PodSandboxId:80dbc679f84500871a825d3df2b7f343feea793940183b016ec69ead09dfd547,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177695984538433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bf51d1de7ff26c7c9aa552da3fe2ffe0724d7803469a79ad74bf4041f2d6ad,PodSandboxId:90dbf5cb53d63b1007b96ac2f15b3ab5addf7c91c47aee5e35979b408bdf7c86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177695971674238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d,PodSandboxId:f9ff89031ae51ec6fa95a38321345b3aa2bc57bf8751c5088f062689405608ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721177684024521477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a,PodSandboxId:d5505d12e4eea6371d163a87a0a0fad1a36f4638a55faa0ec6bd8670095c9a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177682037117710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514,PodSandboxId:9ab38025fdf530e56aa514af0c177da6084a28291f77921d67bc21075d30978b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721177661075973367,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16,PodSandboxId:6ba4af0d3ccef8a42ebd9e065840321dbe05bbbfdb4264d02d4fb4560fe448fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177660981349473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6b
ab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.container.hash: a493abca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe,PodSandboxId:0537e60ae6cb6e20e256f53a4c96c5849b669bc05f3633b31dcd1dae06faa155,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177661017336925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83995
2580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e,PodSandboxId:7294fa65d3f2282a6e67ad2366363868614dd6aee47e40788668d22f29d60892,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177660961296009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map
[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27d87c59-1ee7-4509-9036-be53928b4d2e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.296841661Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98beddf5-190f-4ae9-a53d-1a11179d547b name=/runtime.v1.RuntimeService/Version
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.297034058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98beddf5-190f-4ae9-a53d-1a11179d547b name=/runtime.v1.RuntimeService/Version
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.298578540Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=571eaed8-6ead-429f-bdfd-90d11e4ed423 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.299049015Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721178160299023670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=571eaed8-6ead-429f-bdfd-90d11e4ed423 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.299575581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7f73fea-b8b3-4fbb-9478-729d2748c7b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.299632390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7f73fea-b8b3-4fbb-9478-729d2748c7b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.300044454Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df6e185f483edcd114c5b5e1e069a749aa09fae6aea83a64ca5f00aa3aabe122,PodSandboxId:2c42a8a363a16c1561b4f191e85d2f9e4640c3dcdc85c1420c24fb4df0310f1a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721178097788168943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1aa9c30d3b3ddf3ec3a7a6ca5279181734e5ff502c1dce9aaa9a3d4af79779,PodSandboxId:fa902dff8a01f1da5b3fde6de6ac65d4e90d9f8e3c0f911d9abe06cb4b7deb1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721178064307786299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe0882e6b92d6937fa81f77ea5183c441439f5a5a397ef45b6e629d342dd81c,PodSandboxId:0075107f7665f391900a232f69c36f579cb4c0a44ba25b65ef771987bfc97c63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721178064255156031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18
acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9435d9f50926b9082e0fe944b074713267772411923e8525accce36e3a19a1b,PodSandboxId:8b1ebf053e2c9f73ea821ef74c1a72efa09b99dec311f5d57cc9e350a6d9ca40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721178064115713526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]
string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b81e196cebfdae7bd4c4fea9f33fb3032641523486a3af91af989984dc20a83,PodSandboxId:cb53c056124ab490e1a5a107da3c7088a65775cd8e69e9d89689135aebfa2aa0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721178064147281768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:436a07b748e2dfc1ac19af9dd6966cfcf47fe716502cfdb55f2d6958cfe929b5,PodSandboxId:b1359287405ff1a0bcff6a64a87a47e6a90edb07d269f1806e95b7e5e23df21e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721178060352264994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6bab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a493abca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eae15fbe2335cb28bf1bdfe2a4ae0fb76137c57ea797170a221bce21a335c9d,PodSandboxId:e42479eaa869c667fe11416b5f4f1c71cc7d94cc889e2931ca0d51f87edb600e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721178060273640525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838
e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6742398bcf0e145b5c9d5bd3ee8f9a09aab4acee70075dacb8cef41bf0b2f64,PodSandboxId:0f09420e5dbdb83609705535eabbea00df36dbe358988ba512151614e6cefab3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721178060187402452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 839952580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c2a2c7bd9c8f60abe978569281bedfeb073a8aaaaded1ec5bf7db59556b677,PodSandboxId:ca608ede8b04ed625008072c417ba75de623976f4ffbac722578006ff6007dfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721178060167184071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a6fadd9efc4798dd9696ee44a8d4904525114a1b7f68c3f1eb84af01d321b0,PodSandboxId:6f750ea9aba5f5a09faeeb78de83406a0ca1c80f325c37d33e49abb36353ccd0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177743825240928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40,PodSandboxId:80dbc679f84500871a825d3df2b7f343feea793940183b016ec69ead09dfd547,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177695984538433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bf51d1de7ff26c7c9aa552da3fe2ffe0724d7803469a79ad74bf4041f2d6ad,PodSandboxId:90dbf5cb53d63b1007b96ac2f15b3ab5addf7c91c47aee5e35979b408bdf7c86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177695971674238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d,PodSandboxId:f9ff89031ae51ec6fa95a38321345b3aa2bc57bf8751c5088f062689405608ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721177684024521477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a,PodSandboxId:d5505d12e4eea6371d163a87a0a0fad1a36f4638a55faa0ec6bd8670095c9a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177682037117710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514,PodSandboxId:9ab38025fdf530e56aa514af0c177da6084a28291f77921d67bc21075d30978b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721177661075973367,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16,PodSandboxId:6ba4af0d3ccef8a42ebd9e065840321dbe05bbbfdb4264d02d4fb4560fe448fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177660981349473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6b
ab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.container.hash: a493abca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe,PodSandboxId:0537e60ae6cb6e20e256f53a4c96c5849b669bc05f3633b31dcd1dae06faa155,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177661017336925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83995
2580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e,PodSandboxId:7294fa65d3f2282a6e67ad2366363868614dd6aee47e40788668d22f29d60892,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177660961296009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map
[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7f73fea-b8b3-4fbb-9478-729d2748c7b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.343494211Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afe24957-139e-48b0-a64f-c1d5df75680e name=/runtime.v1.RuntimeService/Version
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.343610877Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afe24957-139e-48b0-a64f-c1d5df75680e name=/runtime.v1.RuntimeService/Version
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.344723981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0682dad-e981-4271-b730-78c4ce7354ec name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.345211125Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721178160345183111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0682dad-e981-4271-b730-78c4ce7354ec name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.345760503Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01ae2902-afff-495f-8fa8-9c18c46ca017 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.345818193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01ae2902-afff-495f-8fa8-9c18c46ca017 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:02:40 multinode-905682 crio[2924]: time="2024-07-17 01:02:40.346218286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df6e185f483edcd114c5b5e1e069a749aa09fae6aea83a64ca5f00aa3aabe122,PodSandboxId:2c42a8a363a16c1561b4f191e85d2f9e4640c3dcdc85c1420c24fb4df0310f1a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721178097788168943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1aa9c30d3b3ddf3ec3a7a6ca5279181734e5ff502c1dce9aaa9a3d4af79779,PodSandboxId:fa902dff8a01f1da5b3fde6de6ac65d4e90d9f8e3c0f911d9abe06cb4b7deb1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721178064307786299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe0882e6b92d6937fa81f77ea5183c441439f5a5a397ef45b6e629d342dd81c,PodSandboxId:0075107f7665f391900a232f69c36f579cb4c0a44ba25b65ef771987bfc97c63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721178064255156031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18
acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9435d9f50926b9082e0fe944b074713267772411923e8525accce36e3a19a1b,PodSandboxId:8b1ebf053e2c9f73ea821ef74c1a72efa09b99dec311f5d57cc9e350a6d9ca40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721178064115713526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]
string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b81e196cebfdae7bd4c4fea9f33fb3032641523486a3af91af989984dc20a83,PodSandboxId:cb53c056124ab490e1a5a107da3c7088a65775cd8e69e9d89689135aebfa2aa0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721178064147281768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:436a07b748e2dfc1ac19af9dd6966cfcf47fe716502cfdb55f2d6958cfe929b5,PodSandboxId:b1359287405ff1a0bcff6a64a87a47e6a90edb07d269f1806e95b7e5e23df21e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721178060352264994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6bab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a493abca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eae15fbe2335cb28bf1bdfe2a4ae0fb76137c57ea797170a221bce21a335c9d,PodSandboxId:e42479eaa869c667fe11416b5f4f1c71cc7d94cc889e2931ca0d51f87edb600e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721178060273640525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838
e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6742398bcf0e145b5c9d5bd3ee8f9a09aab4acee70075dacb8cef41bf0b2f64,PodSandboxId:0f09420e5dbdb83609705535eabbea00df36dbe358988ba512151614e6cefab3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721178060187402452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 839952580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c2a2c7bd9c8f60abe978569281bedfeb073a8aaaaded1ec5bf7db59556b677,PodSandboxId:ca608ede8b04ed625008072c417ba75de623976f4ffbac722578006ff6007dfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721178060167184071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a6fadd9efc4798dd9696ee44a8d4904525114a1b7f68c3f1eb84af01d321b0,PodSandboxId:6f750ea9aba5f5a09faeeb78de83406a0ca1c80f325c37d33e49abb36353ccd0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177743825240928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40,PodSandboxId:80dbc679f84500871a825d3df2b7f343feea793940183b016ec69ead09dfd547,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177695984538433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bf51d1de7ff26c7c9aa552da3fe2ffe0724d7803469a79ad74bf4041f2d6ad,PodSandboxId:90dbf5cb53d63b1007b96ac2f15b3ab5addf7c91c47aee5e35979b408bdf7c86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177695971674238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d,PodSandboxId:f9ff89031ae51ec6fa95a38321345b3aa2bc57bf8751c5088f062689405608ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721177684024521477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a,PodSandboxId:d5505d12e4eea6371d163a87a0a0fad1a36f4638a55faa0ec6bd8670095c9a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177682037117710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514,PodSandboxId:9ab38025fdf530e56aa514af0c177da6084a28291f77921d67bc21075d30978b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721177661075973367,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16,PodSandboxId:6ba4af0d3ccef8a42ebd9e065840321dbe05bbbfdb4264d02d4fb4560fe448fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177660981349473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6b
ab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.container.hash: a493abca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe,PodSandboxId:0537e60ae6cb6e20e256f53a4c96c5849b669bc05f3633b31dcd1dae06faa155,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177661017336925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83995
2580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e,PodSandboxId:7294fa65d3f2282a6e67ad2366363868614dd6aee47e40788668d22f29d60892,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177660961296009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map
[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01ae2902-afff-495f-8fa8-9c18c46ca017 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	df6e185f483ed       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   2c42a8a363a16       busybox-fc5497c4f-l7kh7
	7c1aa9c30d3b3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   fa902dff8a01f       coredns-7db6d8ff4d-lsqqt
	efe0882e6b92d       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      About a minute ago   Running             kindnet-cni               1                   0075107f7665f       kindnet-qnxcz
	9b81e196cebfd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   cb53c056124ab       storage-provisioner
	b9435d9f50926       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      About a minute ago   Running             kube-proxy                1                   8b1ebf053e2c9       kube-proxy-ml4v5
	436a07b748e2d       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Running             kube-apiserver            1                   b1359287405ff       kube-apiserver-multinode-905682
	6eae15fbe2335       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      About a minute ago   Running             kube-scheduler            1                   e42479eaa869c       kube-scheduler-multinode-905682
	a6742398bcf0e       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      About a minute ago   Running             kube-controller-manager   1                   0f09420e5dbdb       kube-controller-manager-multinode-905682
	97c2a2c7bd9c8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   ca608ede8b04e       etcd-multinode-905682
	d2a6fadd9efc4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   6f750ea9aba5f       busybox-fc5497c4f-l7kh7
	9ea48c9be6b6a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   80dbc679f8450       coredns-7db6d8ff4d-lsqqt
	c3bf51d1de7ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   90dbf5cb53d63       storage-provisioner
	b8197caff6893       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    7 minutes ago        Exited              kindnet-cni               0                   f9ff89031ae51       kindnet-qnxcz
	721df31d239ea       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      7 minutes ago        Exited              kube-proxy                0                   d5505d12e4eea       kube-proxy-ml4v5
	1f10eb4245589       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      8 minutes ago        Exited              kube-scheduler            0                   9ab38025fdf53       kube-scheduler-multinode-905682
	6d976dd7c9b9a       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      8 minutes ago        Exited              kube-controller-manager   0                   0537e60ae6cb6       kube-controller-manager-multinode-905682
	d8de5d5cf3c37       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      8 minutes ago        Exited              kube-apiserver            0                   6ba4af0d3ccef       kube-apiserver-multinode-905682
	aa6b9c507f3cc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   7294fa65d3f22       etcd-multinode-905682
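The table above is the CRI-level container inventory captured from the control-plane node. As a minimal sketch, assuming the multinode-905682 profile is still running, an equivalent snapshot could be taken by listing all containers over the node's CRI socket:

    out/minikube-linux-amd64 -p multinode-905682 ssh "sudo crictl ps -a"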
	
	
	==> coredns [7c1aa9c30d3b3ddf3ec3a7a6ca5279181734e5ff502c1dce9aaa9a3d4af79779] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36122 - 64325 "HINFO IN 2811894640309302459.1035428133850246961. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012559467s
	
	
	==> coredns [9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40] <==
	[INFO] 10.244.0.3:40210 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00164537s
	[INFO] 10.244.0.3:33091 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000080103s
	[INFO] 10.244.0.3:44220 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068107s
	[INFO] 10.244.0.3:39279 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001164989s
	[INFO] 10.244.0.3:50172 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000053609s
	[INFO] 10.244.0.3:53946 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076675s
	[INFO] 10.244.0.3:33143 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060419s
	[INFO] 10.244.1.2:38772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119481s
	[INFO] 10.244.1.2:43591 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172508s
	[INFO] 10.244.1.2:44519 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087682s
	[INFO] 10.244.1.2:50162 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198369s
	[INFO] 10.244.0.3:53625 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109023s
	[INFO] 10.244.0.3:59185 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086714s
	[INFO] 10.244.0.3:38795 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062683s
	[INFO] 10.244.0.3:44968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000120421s
	[INFO] 10.244.1.2:34748 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00034098s
	[INFO] 10.244.1.2:38329 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132405s
	[INFO] 10.244.1.2:41014 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157698s
	[INFO] 10.244.1.2:42947 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100469s
	[INFO] 10.244.0.3:41628 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142382s
	[INFO] 10.244.0.3:40676 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106177s
	[INFO] 10.244.0.3:44867 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080057s
	[INFO] 10.244.0.3:42798 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065885s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-905682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-905682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-905682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_54_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:54:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-905682
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:02:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:01:03 +0000   Wed, 17 Jul 2024 00:54:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:01:03 +0000   Wed, 17 Jul 2024 00:54:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:01:03 +0000   Wed, 17 Jul 2024 00:54:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:01:03 +0000   Wed, 17 Jul 2024 00:54:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    multinode-905682
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 06acdb00665d43b4841d6fcbb58dedca
	  System UUID:                06acdb00-665d-43b4-841d-6fcbb58dedca
	  Boot ID:                    f9a3be44-e3ca-44b5-8df1-402904ce325d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l7kh7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 coredns-7db6d8ff4d-lsqqt                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m59s
	  kube-system                 etcd-multinode-905682                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m14s
	  kube-system                 kindnet-qnxcz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m59s
	  kube-system                 kube-apiserver-multinode-905682             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-controller-manager-multinode-905682    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-proxy-ml4v5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-scheduler-multinode-905682             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 7m58s                kube-proxy       
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m14s                kubelet          Node multinode-905682 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m14s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m14s                kubelet          Node multinode-905682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m14s                kubelet          Node multinode-905682 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m14s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m                   node-controller  Node multinode-905682 event: Registered Node multinode-905682 in Controller
	  Normal  NodeReady                7m45s                kubelet          Node multinode-905682 status is now: NodeReady
	  Normal  Starting                 101s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node multinode-905682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node multinode-905682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x7 over 101s)  kubelet          Node multinode-905682 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           84s                  node-controller  Node multinode-905682 event: Registered Node multinode-905682 in Controller
	
	
	Name:               multinode-905682-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-905682-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-905682
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T01_01_45_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:01:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-905682-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:02:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:02:15 +0000   Wed, 17 Jul 2024 01:01:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:02:15 +0000   Wed, 17 Jul 2024 01:01:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:02:15 +0000   Wed, 17 Jul 2024 01:01:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:02:15 +0000   Wed, 17 Jul 2024 01:02:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    multinode-905682-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 729fbc2b2fee4b2193c8df67ed8c3dad
	  System UUID:                729fbc2b-2fee-4b21-93c8-df67ed8c3dad
	  Boot ID:                    c9c693a2-7060-474a-91ca-a40287e077f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-r7st6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kindnet-tjng8              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m18s
	  kube-system                 kube-proxy-6qxcv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m13s                  kube-proxy  
	  Normal  Starting                 52s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m18s (x2 over 7m18s)  kubelet     Node multinode-905682-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m18s (x2 over 7m18s)  kubelet     Node multinode-905682-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m18s (x2 over 7m18s)  kubelet     Node multinode-905682-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m18s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m                     kubelet     Node multinode-905682-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  56s (x2 over 56s)      kubelet     Node multinode-905682-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x2 over 56s)      kubelet     Node multinode-905682-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x2 over 56s)      kubelet     Node multinode-905682-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  56s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                39s                    kubelet     Node multinode-905682-m02 status is now: NodeReady
	
	
	Name:               multinode-905682-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-905682-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-905682
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T01_02_20_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:02:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-905682-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:02:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:02:37 +0000   Wed, 17 Jul 2024 01:02:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:02:37 +0000   Wed, 17 Jul 2024 01:02:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:02:37 +0000   Wed, 17 Jul 2024 01:02:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:02:37 +0000   Wed, 17 Jul 2024 01:02:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    multinode-905682-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 83b351e034f94688834d283967141dd8
	  System UUID:                83b351e0-34f9-4688-834d-283967141dd8
	  Boot ID:                    64375b10-5996-4634-ae8e-858d1888d03c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8jr6z       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m28s
	  kube-system                 kube-proxy-6gwfw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m23s                  kube-proxy  
	  Normal  Starting                 16s                    kube-proxy  
	  Normal  Starting                 5m36s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m28s (x2 over 6m28s)  kubelet     Node multinode-905682-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s (x2 over 6m28s)  kubelet     Node multinode-905682-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s (x2 over 6m28s)  kubelet     Node multinode-905682-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m28s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m10s                  kubelet     Node multinode-905682-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m41s (x2 over 5m41s)  kubelet     Node multinode-905682-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m41s (x2 over 5m41s)  kubelet     Node multinode-905682-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m41s (x2 over 5m41s)  kubelet     Node multinode-905682-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m24s                  kubelet     Node multinode-905682-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  21s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20s (x2 over 21s)      kubelet     Node multinode-905682-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x2 over 21s)      kubelet     Node multinode-905682-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x2 over 21s)      kubelet     Node multinode-905682-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3s                     kubelet     Node multinode-905682-m03 status is now: NodeReady
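The three node descriptions above, including the per-node resource tables, could be regenerated against the same cluster, assuming the multinode-905682 context is still available, with:

    kubectl --context multinode-905682 describe nodes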
	
	
	==> dmesg <==
	[  +0.058916] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.179533] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.112965] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.292893] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.088251] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +5.019323] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.062593] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.993169] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.072896] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.176047] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	[  +0.116232] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.644315] kauditd_printk_skb: 60 callbacks suppressed
	[Jul17 00:55] kauditd_printk_skb: 12 callbacks suppressed
	[Jul17 01:00] systemd-fstab-generator[2841]: Ignoring "noauto" option for root device
	[  +0.140264] systemd-fstab-generator[2853]: Ignoring "noauto" option for root device
	[  +0.162012] systemd-fstab-generator[2867]: Ignoring "noauto" option for root device
	[  +0.154796] systemd-fstab-generator[2879]: Ignoring "noauto" option for root device
	[  +0.298338] systemd-fstab-generator[2908]: Ignoring "noauto" option for root device
	[  +2.314401] systemd-fstab-generator[3008]: Ignoring "noauto" option for root device
	[  +2.553502] systemd-fstab-generator[3131]: Ignoring "noauto" option for root device
	[  +0.078692] kauditd_printk_skb: 122 callbacks suppressed
	[Jul17 01:01] kauditd_printk_skb: 82 callbacks suppressed
	[ +11.843518] kauditd_printk_skb: 2 callbacks suppressed
	[  +4.027250] systemd-fstab-generator[3959]: Ignoring "noauto" option for root device
	[ +17.456486] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [97c2a2c7bd9c8f60abe978569281bedfeb073a8aaaaded1ec5bf7db59556b677] <==
	{"level":"info","ts":"2024-07-17T01:01:00.674086Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:01:00.674148Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:01:00.680112Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:01:00.682721Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-07-17T01:01:00.68496Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-07-17T01:01:00.68688Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"74e924d55c832457","initial-advertise-peer-urls":["https://192.168.39.36:2380"],"listen-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.36:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:01:00.687147Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:01:01.977201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T01:01:01.97727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:01:01.97731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 received MsgPreVoteResp from 74e924d55c832457 at term 2"}
	{"level":"info","ts":"2024-07-17T01:01:01.977321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T01:01:01.977327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 received MsgVoteResp from 74e924d55c832457 at term 3"}
	{"level":"info","ts":"2024-07-17T01:01:01.977338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T01:01:01.977347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 74e924d55c832457 elected leader 74e924d55c832457 at term 3"}
	{"level":"info","ts":"2024-07-17T01:01:01.981786Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"74e924d55c832457","local-member-attributes":"{Name:multinode-905682 ClientURLs:[https://192.168.39.36:2379]}","request-path":"/0/members/74e924d55c832457/attributes","cluster-id":"4bc1bccd4ea9d8cb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:01:01.981845Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:01:01.982339Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:01:01.984471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:01:01.986141Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.36:2379"}
	{"level":"info","ts":"2024-07-17T01:01:01.988984Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:01:01.989017Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:02:24.367564Z","caller":"traceutil/trace.go:171","msg":"trace[637141115] linearizableReadLoop","detail":"{readStateIndex:1222; appliedIndex:1221; }","duration":"209.327662ms","start":"2024-07-17T01:02:24.158211Z","end":"2024-07-17T01:02:24.367539Z","steps":["trace[637141115] 'read index received'  (duration: 209.185311ms)","trace[637141115] 'applied index is now lower than readState.Index'  (duration: 142.021µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:02:24.367855Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.575652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-905682-m03\" ","response":"range_response_count:1 size:3118"}
	{"level":"info","ts":"2024-07-17T01:02:24.367988Z","caller":"traceutil/trace.go:171","msg":"trace[397759538] range","detail":"{range_begin:/registry/minions/multinode-905682-m03; range_end:; response_count:1; response_revision:1110; }","duration":"209.790664ms","start":"2024-07-17T01:02:24.158188Z","end":"2024-07-17T01:02:24.367979Z","steps":["trace[397759538] 'agreement among raft nodes before linearized reading'  (duration: 209.521984ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:02:24.368058Z","caller":"traceutil/trace.go:171","msg":"trace[949569059] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"214.511087ms","start":"2024-07-17T01:02:24.153531Z","end":"2024-07-17T01:02:24.368042Z","steps":["trace[949569059] 'process raft request'  (duration: 213.906823ms)"],"step_count":1}
	
	
	==> etcd [aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e] <==
	{"level":"info","ts":"2024-07-17T00:54:22.013126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T00:54:22.013136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 74e924d55c832457 elected leader 74e924d55c832457 at term 2"}
	{"level":"info","ts":"2024-07-17T00:54:22.017204Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"74e924d55c832457","local-member-attributes":"{Name:multinode-905682 ClientURLs:[https://192.168.39.36:2379]}","request-path":"/0/members/74e924d55c832457/attributes","cluster-id":"4bc1bccd4ea9d8cb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T00:54:22.017331Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:54:22.018298Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:54:22.023453Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.36:2379"}
	{"level":"info","ts":"2024-07-17T00:54:22.017471Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:54:22.019966Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T00:54:22.024981Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T00:54:22.0288Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T00:54:22.031078Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4bc1bccd4ea9d8cb","local-member-id":"74e924d55c832457","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:54:22.031285Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:54:22.031381Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:56:12.372017Z","caller":"traceutil/trace.go:171","msg":"trace[837375532] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"172.152329ms","start":"2024-07-17T00:56:12.199827Z","end":"2024-07-17T00:56:12.371979Z","steps":["trace[837375532] 'process raft request'  (duration: 108.811284ms)","trace[837375532] 'compare'  (duration: 63.171334ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:56:12.372339Z","caller":"traceutil/trace.go:171","msg":"trace[1065103114] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"155.437581ms","start":"2024-07-17T00:56:12.21689Z","end":"2024-07-17T00:56:12.372328Z","steps":["trace[1065103114] 'process raft request'  (duration: 155.202067ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:59:22.278077Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-17T00:59:22.278201Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-905682","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"]}
	{"level":"warn","ts":"2024-07-17T00:59:22.27831Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.36:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:59:22.278351Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.36:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:59:22.278495Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:59:22.278562Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T00:59:22.325829Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"74e924d55c832457","current-leader-member-id":"74e924d55c832457"}
	{"level":"info","ts":"2024-07-17T00:59:22.332357Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-07-17T00:59:22.332588Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-07-17T00:59:22.33263Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-905682","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"]}
	
	
	==> kernel <==
	 01:02:40 up 8 min,  0 users,  load average: 0.27, 0.75, 0.48
	Linux multinode-905682 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d] <==
	I0717 00:58:34.888786       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.3.0/24] 
	I0717 00:58:44.880672       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 00:58:44.880746       1 main.go:303] handling current node
	I0717 00:58:44.880779       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 00:58:44.880786       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 00:58:44.881123       1 main.go:299] Handling node with IPs: map[192.168.39.142:{}]
	I0717 00:58:44.881153       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.3.0/24] 
	I0717 00:58:54.883826       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 00:58:54.883892       1 main.go:303] handling current node
	I0717 00:58:54.883956       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 00:58:54.883967       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 00:58:54.884162       1 main.go:299] Handling node with IPs: map[192.168.39.142:{}]
	I0717 00:58:54.884189       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.3.0/24] 
	I0717 00:59:04.887669       1 main.go:299] Handling node with IPs: map[192.168.39.142:{}]
	I0717 00:59:04.887773       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.3.0/24] 
	I0717 00:59:04.887989       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 00:59:04.887999       1 main.go:303] handling current node
	I0717 00:59:04.888021       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 00:59:04.888025       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 00:59:14.881011       1 main.go:299] Handling node with IPs: map[192.168.39.142:{}]
	I0717 00:59:14.881070       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.3.0/24] 
	I0717 00:59:14.881227       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 00:59:14.881252       1 main.go:303] handling current node
	I0717 00:59:14.881264       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 00:59:14.881271       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [efe0882e6b92d6937fa81f77ea5183c441439f5a5a397ef45b6e629d342dd81c] <==
	I0717 01:01:55.279266       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.3.0/24] 
	I0717 01:02:05.278427       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 01:02:05.278632       1 main.go:303] handling current node
	I0717 01:02:05.278683       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 01:02:05.278713       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 01:02:05.278893       1 main.go:299] Handling node with IPs: map[192.168.39.142:{}]
	I0717 01:02:05.279039       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.3.0/24] 
	I0717 01:02:15.278478       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 01:02:15.278603       1 main.go:303] handling current node
	I0717 01:02:15.278629       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 01:02:15.278647       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 01:02:15.278780       1 main.go:299] Handling node with IPs: map[192.168.39.142:{}]
	I0717 01:02:15.278829       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.3.0/24] 
	I0717 01:02:25.281172       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 01:02:25.281321       1 main.go:303] handling current node
	I0717 01:02:25.281356       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 01:02:25.281455       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 01:02:25.281732       1 main.go:299] Handling node with IPs: map[192.168.39.142:{}]
	I0717 01:02:25.281857       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.2.0/24] 
	I0717 01:02:35.281373       1 main.go:299] Handling node with IPs: map[192.168.39.142:{}]
	I0717 01:02:35.281613       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.2.0/24] 
	I0717 01:02:35.281909       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 01:02:35.282050       1 main.go:303] handling current node
	I0717 01:02:35.282103       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 01:02:35.282129       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [436a07b748e2dfc1ac19af9dd6966cfcf47fe716502cfdb55f2d6958cfe929b5] <==
	I0717 01:01:03.278000       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0717 01:01:03.365828       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 01:01:03.367377       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 01:01:03.374373       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 01:01:03.378306       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:01:03.384869       1 aggregator.go:165] initial CRD sync complete...
	I0717 01:01:03.384955       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 01:01:03.384984       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 01:01:03.384989       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:01:03.386409       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 01:01:03.386522       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 01:01:03.386550       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 01:01:03.386470       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 01:01:03.410468       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 01:01:03.416065       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 01:01:03.416127       1 policy_source.go:224] refreshing policies
	I0717 01:01:03.472315       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:01:04.285437       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:01:05.558739       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:01:05.713363       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:01:05.740319       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:01:05.827315       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:01:05.835411       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:01:16.220193       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 01:01:16.271377       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16] <==
	I0717 00:59:22.311889       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	W0717 00:59:22.311176       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0717 00:59:22.311194       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0717 00:59:22.311277       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	W0717 00:59:22.313601       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0717 00:59:22.311564       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0717 00:59:22.311596       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0717 00:59:22.311607       1 establishing_controller.go:87] Shutting down EstablishingController
	I0717 00:59:22.311624       1 naming_controller.go:302] Shutting down NamingConditionController
	I0717 00:59:22.311634       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0717 00:59:22.311647       1 controller.go:167] Shutting down OpenAPI controller
	I0717 00:59:22.311661       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0717 00:59:22.311671       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0717 00:59:22.311687       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0717 00:59:22.311701       1 available_controller.go:439] Shutting down AvailableConditionController
	I0717 00:59:22.311715       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0717 00:59:22.311722       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0717 00:59:22.311732       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0717 00:59:22.311740       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0717 00:59:22.311749       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0717 00:59:22.311772       1 controller.go:129] Ending legacy_token_tracking_controller
	I0717 00:59:22.314474       1 controller.go:130] Shutting down legacy_token_tracking_controller
	W0717 00:59:22.311836       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:59:22.314580       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:59:22.314654       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe] <==
	I0717 00:55:22.829534       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-905682-m02\" does not exist"
	I0717 00:55:22.914696       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-905682-m02" podCIDRs=["10.244.1.0/24"]
	I0717 00:55:25.152230       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-905682-m02"
	I0717 00:55:40.149735       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 00:55:42.515782       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.605076ms"
	I0717 00:55:42.549993       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.077312ms"
	I0717 00:55:42.550190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.208µs"
	I0717 00:55:42.550426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.17µs"
	I0717 00:55:44.470491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.892392ms"
	I0717 00:55:44.473036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="172.557µs"
	I0717 00:55:44.783458       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.791422ms"
	I0717 00:55:44.784313       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.428µs"
	I0717 00:56:12.375642       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-905682-m03\" does not exist"
	I0717 00:56:12.375766       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 00:56:12.431537       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-905682-m03" podCIDRs=["10.244.2.0/24"]
	I0717 00:56:15.364252       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-905682-m03"
	I0717 00:56:30.374110       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 00:56:58.526774       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 00:56:59.826093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 00:56:59.826258       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-905682-m03\" does not exist"
	I0717 00:56:59.839303       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-905682-m03" podCIDRs=["10.244.3.0/24"]
	I0717 00:57:16.938325       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 00:58:00.415333       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m03"
	I0717 00:58:00.470479       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.98783ms"
	I0717 00:58:00.470600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.689µs"
	
	
	==> kube-controller-manager [a6742398bcf0e145b5c9d5bd3ee8f9a09aab4acee70075dacb8cef41bf0b2f64] <==
	I0717 01:01:16.822492       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:01:16.822589       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 01:01:16.840958       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:01:40.445703       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.143033ms"
	I0717 01:01:40.445821       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.581µs"
	I0717 01:01:40.459587       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.335489ms"
	I0717 01:01:40.459674       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.977µs"
	I0717 01:01:44.694319       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-905682-m02\" does not exist"
	I0717 01:01:44.709766       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-905682-m02" podCIDRs=["10.244.1.0/24"]
	I0717 01:01:45.591663       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.578µs"
	I0717 01:01:45.645429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.729µs"
	I0717 01:01:45.657042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.816µs"
	I0717 01:01:45.665023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.412µs"
	I0717 01:01:45.668753       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.97µs"
	I0717 01:01:46.356812       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.803µs"
	I0717 01:02:01.004960       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 01:02:01.023802       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.057µs"
	I0717 01:02:01.048517       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.876µs"
	I0717 01:02:02.906186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.671109ms"
	I0717 01:02:02.906471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.316µs"
	I0717 01:02:19.081127       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 01:02:20.091266       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-905682-m03\" does not exist"
	I0717 01:02:20.091374       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 01:02:20.101649       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-905682-m03" podCIDRs=["10.244.2.0/24"]
	I0717 01:02:37.425727       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	
	
	==> kube-proxy [721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a] <==
	I0717 00:54:42.206025       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:54:42.219398       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.36"]
	I0717 00:54:42.265593       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:54:42.265741       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:54:42.265777       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:54:42.268470       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:54:42.268688       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:54:42.268862       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:54:42.270250       1 config.go:192] "Starting service config controller"
	I0717 00:54:42.270504       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:54:42.270635       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:54:42.270707       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:54:42.271392       1 config.go:319] "Starting node config controller"
	I0717 00:54:42.272262       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:54:42.371282       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:54:42.371282       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:54:42.372713       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b9435d9f50926b9082e0fe944b074713267772411923e8525accce36e3a19a1b] <==
	I0717 01:01:04.373362       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:01:04.408813       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.36"]
	I0717 01:01:04.471073       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:01:04.471176       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:01:04.471194       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:01:04.477678       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:01:04.478030       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:01:04.480158       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:01:04.481420       1 config.go:192] "Starting service config controller"
	I0717 01:01:04.481510       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:01:04.481610       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:01:04.481639       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:01:04.482698       1 config.go:319] "Starting node config controller"
	I0717 01:01:04.482707       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:01:04.582429       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:01:04.582554       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:01:04.583053       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514] <==
	E0717 00:54:23.638659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:54:23.638696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:54:23.638723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:54:23.638794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:54:23.638822       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:54:23.641255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:23.641398       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:54:23.641260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:54:23.641534       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:54:24.629550       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 00:54:24.629601       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:54:24.664130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:54:24.664579       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:54:24.669237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:54:24.669277       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:54:24.744414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:54:24.744554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:54:24.820763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:24.821278       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:54:24.829122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:24.829189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:54:24.939417       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:54:24.939530       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 00:54:26.933405       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 00:59:22.281673       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [6eae15fbe2335cb28bf1bdfe2a4ae0fb76137c57ea797170a221bce21a335c9d] <==
	I0717 01:01:01.792156       1 serving.go:380] Generated self-signed cert in-memory
	W0717 01:01:03.320101       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:01:03.320236       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:01:03.320270       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:01:03.320356       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:01:03.384248       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 01:01:03.387395       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:01:03.389558       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:01:03.389784       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:01:03.389848       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:01:03.389957       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:01:03.489982       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:01:00 multinode-905682 kubelet[3138]: E0717 01:01:00.480420    3138 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.36:8443: connect: connection refused
	Jul 17 01:01:01 multinode-905682 kubelet[3138]: I0717 01:01:01.031800    3138 kubelet_node_status.go:73] "Attempting to register node" node="multinode-905682"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.502555    3138 apiserver.go:52] "Watching apiserver"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.507200    3138 topology_manager.go:215] "Topology Admit Handler" podUID="8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lsqqt"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.507525    3138 topology_manager.go:215] "Topology Admit Handler" podUID="93b6c6dc-424a-4d24-aabc-6cf18acf53a9" podNamespace="kube-system" podName="kindnet-qnxcz"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.507730    3138 topology_manager.go:215] "Topology Admit Handler" podUID="801b3f18-a89d-4cfe-ae0b-29d86546a71c" podNamespace="kube-system" podName="kube-proxy-ml4v5"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.507853    3138 topology_manager.go:215] "Topology Admit Handler" podUID="08023a17-949f-4160-b54d-9239629fc0cb" podNamespace="kube-system" podName="storage-provisioner"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.508092    3138 topology_manager.go:215] "Topology Admit Handler" podUID="b3241a7b-8574-4523-a8c3-749622a7adc7" podNamespace="default" podName="busybox-fc5497c4f-l7kh7"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.530742    3138 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.531708    3138 kubelet_node_status.go:112] "Node was previously registered" node="multinode-905682"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.531856    3138 kubelet_node_status.go:76] "Successfully registered node" node="multinode-905682"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.533058    3138 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.534104    3138 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.545541    3138 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93b6c6dc-424a-4d24-aabc-6cf18acf53a9-xtables-lock\") pod \"kindnet-qnxcz\" (UID: \"93b6c6dc-424a-4d24-aabc-6cf18acf53a9\") " pod="kube-system/kindnet-qnxcz"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.545631    3138 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93b6c6dc-424a-4d24-aabc-6cf18acf53a9-lib-modules\") pod \"kindnet-qnxcz\" (UID: \"93b6c6dc-424a-4d24-aabc-6cf18acf53a9\") " pod="kube-system/kindnet-qnxcz"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.545704    3138 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/08023a17-949f-4160-b54d-9239629fc0cb-tmp\") pod \"storage-provisioner\" (UID: \"08023a17-949f-4160-b54d-9239629fc0cb\") " pod="kube-system/storage-provisioner"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.545759    3138 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/801b3f18-a89d-4cfe-ae0b-29d86546a71c-xtables-lock\") pod \"kube-proxy-ml4v5\" (UID: \"801b3f18-a89d-4cfe-ae0b-29d86546a71c\") " pod="kube-system/kube-proxy-ml4v5"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.545799    3138 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/801b3f18-a89d-4cfe-ae0b-29d86546a71c-lib-modules\") pod \"kube-proxy-ml4v5\" (UID: \"801b3f18-a89d-4cfe-ae0b-29d86546a71c\") " pod="kube-system/kube-proxy-ml4v5"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.545852    3138 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/93b6c6dc-424a-4d24-aabc-6cf18acf53a9-cni-cfg\") pod \"kindnet-qnxcz\" (UID: \"93b6c6dc-424a-4d24-aabc-6cf18acf53a9\") " pod="kube-system/kindnet-qnxcz"
	Jul 17 01:01:09 multinode-905682 kubelet[3138]: I0717 01:01:09.267716    3138 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 17 01:01:59 multinode-905682 kubelet[3138]: E0717 01:01:59.573501    3138 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:01:59 multinode-905682 kubelet[3138]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:01:59 multinode-905682 kubelet[3138]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:01:59 multinode-905682 kubelet[3138]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:01:59 multinode-905682 kubelet[3138]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:02:39.949764   50998 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19265-12897/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-905682 -n multinode-905682
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-905682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (322.27s)
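Note on the stderr above: "bufio.Scanner: token too long" is Go's bufio.ErrTooLong, returned when a single line in lastStart.txt exceeds the scanner's default 64 KiB token limit. A minimal sketch (not minikube's actual logs code; the file path is an illustrative stand-in for .minikube/logs/lastStart.txt) of reading such a file with a raised buffer cap:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // hypothetical path for illustration
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default cap is bufio.MaxScanTokenSize (64 KiB); allow lines up to 1 MiB.
	sc.Buffer(make([]byte, 0, bufio.MaxScanTokenSize), 1<<20)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// Still reports bufio.ErrTooLong if a line exceeds the raised cap.
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}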

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 stop
E0717 01:04:18.740716   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-905682 stop: exit status 82 (2m0.456094088s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-905682-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-905682 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-905682 status: exit status 3 (18.701261862s)

                                                
                                                
-- stdout --
	multinode-905682
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-905682-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:05:02.992944   51664 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.71:22: connect: no route to host
	E0717 01:05:02.993011   51664 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.71:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-905682 status" : exit status 3
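The two failures above hinge on exit codes: "minikube stop" returned 82 (GUEST_STOP_TIMEOUT) and "minikube status" returned 3 once the m02 host became unreachable over SSH. A hedged sketch, not the actual test harness helpers, of how a Go caller can surface a command's non-zero exit status together with its combined output:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runAndReport is an illustrative helper, not part of minikube's test code.
func runAndReport(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit: keep both the code and the output for post-mortem logging.
		fmt.Printf("%s: exit status %d\n%s", name, exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("failed to run:", err) // e.g. binary not found
		return
	}
	fmt.Printf("%s: ok\n%s", name, out)
}

func main() {
	runAndReport("out/minikube-linux-amd64", "-p", "multinode-905682", "status")
}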
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-905682 -n multinode-905682
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-905682 logs -n 25: (1.483384687s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-905682 cp multinode-905682-m02:/home/docker/cp-test.txt                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682:/home/docker/cp-test_multinode-905682-m02_multinode-905682.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n multinode-905682 sudo cat                                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | /home/docker/cp-test_multinode-905682-m02_multinode-905682.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-905682 cp multinode-905682-m02:/home/docker/cp-test.txt                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03:/home/docker/cp-test_multinode-905682-m02_multinode-905682-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n multinode-905682-m03 sudo cat                                   | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | /home/docker/cp-test_multinode-905682-m02_multinode-905682-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-905682 cp testdata/cp-test.txt                                                | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-905682 cp multinode-905682-m03:/home/docker/cp-test.txt                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2525639886/001/cp-test_multinode-905682-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-905682 cp multinode-905682-m03:/home/docker/cp-test.txt                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682:/home/docker/cp-test_multinode-905682-m03_multinode-905682.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n multinode-905682 sudo cat                                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | /home/docker/cp-test_multinode-905682-m03_multinode-905682.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-905682 cp multinode-905682-m03:/home/docker/cp-test.txt                       | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m02:/home/docker/cp-test_multinode-905682-m03_multinode-905682-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n multinode-905682-m02 sudo cat                                   | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | /home/docker/cp-test_multinode-905682-m03_multinode-905682-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-905682 node stop m03                                                          | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	| node    | multinode-905682 node start                                                             | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-905682                                                                | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:57 UTC |                     |
	| stop    | -p multinode-905682                                                                     | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:57 UTC |                     |
	| start   | -p multinode-905682                                                                     | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 00:59 UTC | 17 Jul 24 01:02 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-905682                                                                | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 01:02 UTC |                     |
	| node    | multinode-905682 node delete                                                            | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 01:02 UTC | 17 Jul 24 01:02 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-905682 stop                                                                   | multinode-905682 | jenkins | v1.33.1 | 17 Jul 24 01:02 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:59:21
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:59:21.416754   49910 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:59:21.417016   49910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:59:21.417025   49910 out.go:304] Setting ErrFile to fd 2...
	I0717 00:59:21.417029   49910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:59:21.417202   49910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:59:21.417762   49910 out.go:298] Setting JSON to false
	I0717 00:59:21.418682   49910 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6110,"bootTime":1721171851,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:59:21.418745   49910 start.go:139] virtualization: kvm guest
	I0717 00:59:21.421205   49910 out.go:177] * [multinode-905682] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:59:21.422714   49910 notify.go:220] Checking for updates...
	I0717 00:59:21.422741   49910 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:59:21.424073   49910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:59:21.425393   49910 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:59:21.427035   49910 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:59:21.428494   49910 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:59:21.429808   49910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:59:21.431727   49910 config.go:182] Loaded profile config "multinode-905682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:59:21.431809   49910 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:59:21.432202   49910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:59:21.432271   49910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:59:21.447285   49910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46689
	I0717 00:59:21.447774   49910 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:59:21.448421   49910 main.go:141] libmachine: Using API Version  1
	I0717 00:59:21.448446   49910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:59:21.448855   49910 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:59:21.449107   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 00:59:21.483446   49910 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 00:59:21.484900   49910 start.go:297] selected driver: kvm2
	I0717 00:59:21.484917   49910 start.go:901] validating driver "kvm2" against &{Name:multinode-905682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-905682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.142 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:59:21.485052   49910 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:59:21.485361   49910 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:59:21.485457   49910 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:59:21.499701   49910 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:59:21.500381   49910 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:59:21.500411   49910 cni.go:84] Creating CNI manager for ""
	I0717 00:59:21.500422   49910 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 00:59:21.500509   49910 start.go:340] cluster config:
	{Name:multinode-905682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-905682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.142 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:59:21.500678   49910 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:59:21.502619   49910 out.go:177] * Starting "multinode-905682" primary control-plane node in "multinode-905682" cluster
	I0717 00:59:21.503881   49910 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:59:21.503918   49910 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:59:21.503928   49910 cache.go:56] Caching tarball of preloaded images
	I0717 00:59:21.504000   49910 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:59:21.504011   49910 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:59:21.504136   49910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/config.json ...
	I0717 00:59:21.504312   49910 start.go:360] acquireMachinesLock for multinode-905682: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:59:21.504349   49910 start.go:364] duration metric: took 22.013µs to acquireMachinesLock for "multinode-905682"
	I0717 00:59:21.504362   49910 start.go:96] Skipping create...Using existing machine configuration
	I0717 00:59:21.504371   49910 fix.go:54] fixHost starting: 
	I0717 00:59:21.504654   49910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:59:21.504685   49910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:59:21.518170   49910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0717 00:59:21.518583   49910 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:59:21.519087   49910 main.go:141] libmachine: Using API Version  1
	I0717 00:59:21.519114   49910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:59:21.519398   49910 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:59:21.519554   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 00:59:21.519685   49910 main.go:141] libmachine: (multinode-905682) Calling .GetState
	I0717 00:59:21.521257   49910 fix.go:112] recreateIfNeeded on multinode-905682: state=Running err=<nil>
	W0717 00:59:21.521280   49910 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 00:59:21.523262   49910 out.go:177] * Updating the running kvm2 "multinode-905682" VM ...
	I0717 00:59:21.524442   49910 machine.go:94] provisionDockerMachine start ...
	I0717 00:59:21.524455   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 00:59:21.524646   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 00:59:21.526885   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.527287   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:21.527317   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.527472   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 00:59:21.527610   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.527767   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.527962   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 00:59:21.528216   49910 main.go:141] libmachine: Using SSH client type: native
	I0717 00:59:21.528434   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0717 00:59:21.528448   49910 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:59:21.641899   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-905682
	
	I0717 00:59:21.641925   49910 main.go:141] libmachine: (multinode-905682) Calling .GetMachineName
	I0717 00:59:21.642151   49910 buildroot.go:166] provisioning hostname "multinode-905682"
	I0717 00:59:21.642176   49910 main.go:141] libmachine: (multinode-905682) Calling .GetMachineName
	I0717 00:59:21.642345   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 00:59:21.645036   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.645409   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:21.645434   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.645570   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 00:59:21.645753   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.645922   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.646089   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 00:59:21.646275   49910 main.go:141] libmachine: Using SSH client type: native
	I0717 00:59:21.646434   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0717 00:59:21.646445   49910 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-905682 && echo "multinode-905682" | sudo tee /etc/hostname
	I0717 00:59:21.772936   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-905682
	
	I0717 00:59:21.772969   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 00:59:21.775470   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.775776   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:21.775825   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.775915   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 00:59:21.776170   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.776351   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.776529   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 00:59:21.776725   49910 main.go:141] libmachine: Using SSH client type: native
	I0717 00:59:21.776902   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0717 00:59:21.776926   49910 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-905682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-905682/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-905682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:59:21.881540   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:59:21.881568   49910 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 00:59:21.881590   49910 buildroot.go:174] setting up certificates
	I0717 00:59:21.881600   49910 provision.go:84] configureAuth start
	I0717 00:59:21.881612   49910 main.go:141] libmachine: (multinode-905682) Calling .GetMachineName
	I0717 00:59:21.881871   49910 main.go:141] libmachine: (multinode-905682) Calling .GetIP
	I0717 00:59:21.884541   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.884923   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:21.884964   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.885123   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 00:59:21.887170   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.887489   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:21.887511   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.887613   49910 provision.go:143] copyHostCerts
	I0717 00:59:21.887637   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:59:21.887679   49910 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 00:59:21.887693   49910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 00:59:21.887755   49910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 00:59:21.887857   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:59:21.887877   49910 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 00:59:21.887883   49910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 00:59:21.887911   49910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 00:59:21.887972   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:59:21.887988   49910 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 00:59:21.887993   49910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 00:59:21.888013   49910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 00:59:21.888073   49910 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.multinode-905682 san=[127.0.0.1 192.168.39.36 localhost minikube multinode-905682]
	I0717 00:59:21.993347   49910 provision.go:177] copyRemoteCerts
	I0717 00:59:21.993413   49910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:59:21.993438   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 00:59:21.996035   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.996350   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:21.996388   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:21.996591   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 00:59:21.996745   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:21.996864   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 00:59:21.997035   49910 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/multinode-905682/id_rsa Username:docker}
	I0717 00:59:22.080949   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:59:22.081019   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:59:22.106693   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:59:22.106754   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 00:59:22.131114   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:59:22.131211   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 00:59:22.160581   49910 provision.go:87] duration metric: took 278.96683ms to configureAuth
	I0717 00:59:22.160610   49910 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:59:22.160817   49910 config.go:182] Loaded profile config "multinode-905682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:59:22.160888   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 00:59:22.163989   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:22.164378   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:59:22.164405   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:59:22.164599   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 00:59:22.164762   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:22.164923   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:59:22.165098   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 00:59:22.165262   49910 main.go:141] libmachine: Using SSH client type: native
	I0717 00:59:22.165424   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0717 00:59:22.165437   49910 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:00:53.059796   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:00:53.059824   49910 machine.go:97] duration metric: took 1m31.535371677s to provisionDockerMachine
	I0717 01:00:53.059836   49910 start.go:293] postStartSetup for "multinode-905682" (driver="kvm2")
	I0717 01:00:53.059849   49910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:00:53.059881   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 01:00:53.060223   49910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:00:53.060243   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 01:00:53.063159   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.063487   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 01:00:53.063513   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.063608   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 01:00:53.063787   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 01:00:53.063948   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 01:00:53.064086   49910 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/multinode-905682/id_rsa Username:docker}
	I0717 01:00:53.149985   49910 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:00:53.154685   49910 command_runner.go:130] > NAME=Buildroot
	I0717 01:00:53.154710   49910 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 01:00:53.154716   49910 command_runner.go:130] > ID=buildroot
	I0717 01:00:53.154722   49910 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 01:00:53.154729   49910 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 01:00:53.154814   49910 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:00:53.154835   49910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:00:53.154899   49910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:00:53.154965   49910 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:00:53.154974   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /etc/ssl/certs/200682.pem
	I0717 01:00:53.155078   49910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:00:53.165157   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:00:53.190755   49910 start.go:296] duration metric: took 130.903803ms for postStartSetup
	I0717 01:00:53.190817   49910 fix.go:56] duration metric: took 1m31.686446496s for fixHost
	I0717 01:00:53.190845   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 01:00:53.193379   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.193756   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 01:00:53.193792   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.193997   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 01:00:53.194209   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 01:00:53.194372   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 01:00:53.194568   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 01:00:53.194736   49910 main.go:141] libmachine: Using SSH client type: native
	I0717 01:00:53.194906   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0717 01:00:53.194916   49910 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:00:53.297811   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721178053.263704071
	
	I0717 01:00:53.297841   49910 fix.go:216] guest clock: 1721178053.263704071
	I0717 01:00:53.297852   49910 fix.go:229] Guest: 2024-07-17 01:00:53.263704071 +0000 UTC Remote: 2024-07-17 01:00:53.190823267 +0000 UTC m=+91.807199193 (delta=72.880804ms)
	I0717 01:00:53.297881   49910 fix.go:200] guest clock delta is within tolerance: 72.880804ms
	I0717 01:00:53.297891   49910 start.go:83] releasing machines lock for "multinode-905682", held for 1m31.793530968s
	I0717 01:00:53.297923   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 01:00:53.298229   49910 main.go:141] libmachine: (multinode-905682) Calling .GetIP
	I0717 01:00:53.300550   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.300952   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 01:00:53.300982   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.301106   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 01:00:53.301678   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 01:00:53.301835   49910 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 01:00:53.301910   49910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:00:53.301960   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 01:00:53.302062   49910 ssh_runner.go:195] Run: cat /version.json
	I0717 01:00:53.302081   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 01:00:53.304379   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.304707   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 01:00:53.304735   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.304759   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.304900   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 01:00:53.305068   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 01:00:53.305202   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 01:00:53.305224   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 01:00:53.305225   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:53.305387   49910 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/multinode-905682/id_rsa Username:docker}
	I0717 01:00:53.305448   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 01:00:53.305596   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 01:00:53.305741   49910 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 01:00:53.305896   49910 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/multinode-905682/id_rsa Username:docker}
	I0717 01:00:53.381477   49910 command_runner.go:130] > {"iso_version": "v1.33.1-1721037971-19249", "kicbase_version": "v0.0.44-1720578864-19219", "minikube_version": "v1.33.1", "commit": "82f9201b4da402696a199908092788c5f6c09714"}
	I0717 01:00:53.381635   49910 ssh_runner.go:195] Run: systemctl --version
	I0717 01:00:53.405269   49910 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 01:00:53.405336   49910 command_runner.go:130] > systemd 252 (252)
	I0717 01:00:53.405366   49910 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0717 01:00:53.405434   49910 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:00:53.567925   49910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 01:00:53.575299   49910 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 01:00:53.575607   49910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:00:53.575674   49910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:00:53.584942   49910 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 01:00:53.584964   49910 start.go:495] detecting cgroup driver to use...
	I0717 01:00:53.585041   49910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:00:53.602283   49910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:00:53.616665   49910 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:00:53.616729   49910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:00:53.629988   49910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:00:53.643128   49910 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:00:53.789248   49910 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:00:53.927259   49910 docker.go:233] disabling docker service ...
	I0717 01:00:53.927340   49910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:00:53.944388   49910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:00:53.958802   49910 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:00:54.100243   49910 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:00:54.248952   49910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:00:54.263522   49910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:00:54.281243   49910 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 01:00:54.281628   49910 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:00:54.281682   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.291991   49910 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:00:54.292054   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.302324   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.312346   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.322384   49910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:00:54.332779   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.343034   49910 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.353704   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:00:54.370183   49910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:00:54.398161   49910 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 01:00:54.398262   49910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:00:54.407992   49910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:00:54.548336   49910 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:00:56.382898   49910 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.834525357s)
	I0717 01:00:56.382934   49910 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:00:56.382988   49910 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:00:56.387951   49910 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 01:00:56.387975   49910 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 01:00:56.387991   49910 command_runner.go:130] > Device: 0,22	Inode: 1321        Links: 1
	I0717 01:00:56.388001   49910 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 01:00:56.388009   49910 command_runner.go:130] > Access: 2024-07-17 01:00:56.301662268 +0000
	I0717 01:00:56.388019   49910 command_runner.go:130] > Modify: 2024-07-17 01:00:56.235659640 +0000
	I0717 01:00:56.388029   49910 command_runner.go:130] > Change: 2024-07-17 01:00:56.235659640 +0000
	I0717 01:00:56.388038   49910 command_runner.go:130] >  Birth: -
	I0717 01:00:56.388062   49910 start.go:563] Will wait 60s for crictl version
	I0717 01:00:56.388105   49910 ssh_runner.go:195] Run: which crictl
	I0717 01:00:56.392003   49910 command_runner.go:130] > /usr/bin/crictl
	I0717 01:00:56.392075   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:00:56.431538   49910 command_runner.go:130] > Version:  0.1.0
	I0717 01:00:56.431566   49910 command_runner.go:130] > RuntimeName:  cri-o
	I0717 01:00:56.431574   49910 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0717 01:00:56.431582   49910 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 01:00:56.431616   49910 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:00:56.431685   49910 ssh_runner.go:195] Run: crio --version
	I0717 01:00:56.457481   49910 command_runner.go:130] > crio version 1.29.1
	I0717 01:00:56.457500   49910 command_runner.go:130] > Version:        1.29.1
	I0717 01:00:56.457506   49910 command_runner.go:130] > GitCommit:      unknown
	I0717 01:00:56.457510   49910 command_runner.go:130] > GitCommitDate:  unknown
	I0717 01:00:56.457513   49910 command_runner.go:130] > GitTreeState:   clean
	I0717 01:00:56.457520   49910 command_runner.go:130] > BuildDate:      2024-07-15T15:38:42Z
	I0717 01:00:56.457527   49910 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 01:00:56.457533   49910 command_runner.go:130] > Compiler:       gc
	I0717 01:00:56.457540   49910 command_runner.go:130] > Platform:       linux/amd64
	I0717 01:00:56.457547   49910 command_runner.go:130] > Linkmode:       dynamic
	I0717 01:00:56.457554   49910 command_runner.go:130] > BuildTags:      
	I0717 01:00:56.457563   49910 command_runner.go:130] >   containers_image_ostree_stub
	I0717 01:00:56.457567   49910 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 01:00:56.457573   49910 command_runner.go:130] >   btrfs_noversion
	I0717 01:00:56.457595   49910 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 01:00:56.457602   49910 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 01:00:56.457605   49910 command_runner.go:130] >   seccomp
	I0717 01:00:56.457612   49910 command_runner.go:130] > LDFlags:          unknown
	I0717 01:00:56.457618   49910 command_runner.go:130] > SeccompEnabled:   true
	I0717 01:00:56.457628   49910 command_runner.go:130] > AppArmorEnabled:  false
	I0717 01:00:56.458634   49910 ssh_runner.go:195] Run: crio --version
	I0717 01:00:56.486011   49910 command_runner.go:130] > crio version 1.29.1
	I0717 01:00:56.486032   49910 command_runner.go:130] > Version:        1.29.1
	I0717 01:00:56.486040   49910 command_runner.go:130] > GitCommit:      unknown
	I0717 01:00:56.486047   49910 command_runner.go:130] > GitCommitDate:  unknown
	I0717 01:00:56.486054   49910 command_runner.go:130] > GitTreeState:   clean
	I0717 01:00:56.486062   49910 command_runner.go:130] > BuildDate:      2024-07-15T15:38:42Z
	I0717 01:00:56.486066   49910 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 01:00:56.486070   49910 command_runner.go:130] > Compiler:       gc
	I0717 01:00:56.486086   49910 command_runner.go:130] > Platform:       linux/amd64
	I0717 01:00:56.486093   49910 command_runner.go:130] > Linkmode:       dynamic
	I0717 01:00:56.486098   49910 command_runner.go:130] > BuildTags:      
	I0717 01:00:56.486105   49910 command_runner.go:130] >   containers_image_ostree_stub
	I0717 01:00:56.486109   49910 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 01:00:56.486113   49910 command_runner.go:130] >   btrfs_noversion
	I0717 01:00:56.486123   49910 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 01:00:56.486129   49910 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 01:00:56.486133   49910 command_runner.go:130] >   seccomp
	I0717 01:00:56.486140   49910 command_runner.go:130] > LDFlags:          unknown
	I0717 01:00:56.486144   49910 command_runner.go:130] > SeccompEnabled:   true
	I0717 01:00:56.486150   49910 command_runner.go:130] > AppArmorEnabled:  false
	I0717 01:00:56.488148   49910 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:00:56.489426   49910 main.go:141] libmachine: (multinode-905682) Calling .GetIP
	I0717 01:00:56.492003   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:56.492340   49910 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 01:00:56.492366   49910 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 01:00:56.492580   49910 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:00:56.496859   49910 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0717 01:00:56.496951   49910 kubeadm.go:883] updating cluster {Name:multinode-905682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-905682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.142 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:00:56.497104   49910 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:00:56.497163   49910 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:00:56.553748   49910 command_runner.go:130] > {
	I0717 01:00:56.553772   49910 command_runner.go:130] >   "images": [
	I0717 01:00:56.553782   49910 command_runner.go:130] >     {
	I0717 01:00:56.553790   49910 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 01:00:56.553795   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.553801   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 01:00:56.553805   49910 command_runner.go:130] >       ],
	I0717 01:00:56.553809   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.553817   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 01:00:56.553826   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 01:00:56.553829   49910 command_runner.go:130] >       ],
	I0717 01:00:56.553834   49910 command_runner.go:130] >       "size": "65908273",
	I0717 01:00:56.553838   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.553842   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.553847   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.553851   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.553857   49910 command_runner.go:130] >     },
	I0717 01:00:56.553860   49910 command_runner.go:130] >     {
	I0717 01:00:56.553866   49910 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0717 01:00:56.553872   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.553877   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0717 01:00:56.553881   49910 command_runner.go:130] >       ],
	I0717 01:00:56.553884   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.553892   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0717 01:00:56.553905   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0717 01:00:56.553910   49910 command_runner.go:130] >       ],
	I0717 01:00:56.553914   49910 command_runner.go:130] >       "size": "87165492",
	I0717 01:00:56.553918   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.553928   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.553932   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.553937   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.553940   49910 command_runner.go:130] >     },
	I0717 01:00:56.553943   49910 command_runner.go:130] >     {
	I0717 01:00:56.553949   49910 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 01:00:56.553955   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.553960   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 01:00:56.553963   49910 command_runner.go:130] >       ],
	I0717 01:00:56.553970   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.553981   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 01:00:56.553990   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 01:00:56.553995   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554000   49910 command_runner.go:130] >       "size": "1363676",
	I0717 01:00:56.554006   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.554010   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554013   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554020   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554023   49910 command_runner.go:130] >     },
	I0717 01:00:56.554027   49910 command_runner.go:130] >     {
	I0717 01:00:56.554033   49910 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 01:00:56.554039   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554044   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 01:00:56.554050   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554053   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554062   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 01:00:56.554078   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 01:00:56.554083   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554087   49910 command_runner.go:130] >       "size": "31470524",
	I0717 01:00:56.554093   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.554097   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554102   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554106   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554109   49910 command_runner.go:130] >     },
	I0717 01:00:56.554115   49910 command_runner.go:130] >     {
	I0717 01:00:56.554121   49910 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 01:00:56.554127   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554132   49910 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 01:00:56.554137   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554142   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554150   49910 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 01:00:56.554159   49910 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 01:00:56.554165   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554169   49910 command_runner.go:130] >       "size": "61245718",
	I0717 01:00:56.554175   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.554179   49910 command_runner.go:130] >       "username": "nonroot",
	I0717 01:00:56.554189   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554196   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554199   49910 command_runner.go:130] >     },
	I0717 01:00:56.554203   49910 command_runner.go:130] >     {
	I0717 01:00:56.554209   49910 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 01:00:56.554214   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554219   49910 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 01:00:56.554224   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554228   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554237   49910 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 01:00:56.554245   49910 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 01:00:56.554255   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554261   49910 command_runner.go:130] >       "size": "150779692",
	I0717 01:00:56.554265   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.554271   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.554274   49910 command_runner.go:130] >       },
	I0717 01:00:56.554279   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554283   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554289   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554293   49910 command_runner.go:130] >     },
	I0717 01:00:56.554298   49910 command_runner.go:130] >     {
	I0717 01:00:56.554304   49910 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 01:00:56.554310   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554315   49910 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 01:00:56.554320   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554324   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554333   49910 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 01:00:56.554342   49910 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 01:00:56.554347   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554352   49910 command_runner.go:130] >       "size": "117609954",
	I0717 01:00:56.554357   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.554361   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.554366   49910 command_runner.go:130] >       },
	I0717 01:00:56.554370   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554373   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554379   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554388   49910 command_runner.go:130] >     },
	I0717 01:00:56.554394   49910 command_runner.go:130] >     {
	I0717 01:00:56.554400   49910 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 01:00:56.554406   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554411   49910 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 01:00:56.554424   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554430   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554449   49910 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 01:00:56.554458   49910 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 01:00:56.554462   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554466   49910 command_runner.go:130] >       "size": "112194888",
	I0717 01:00:56.554471   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.554475   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.554480   49910 command_runner.go:130] >       },
	I0717 01:00:56.554484   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554488   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554491   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554494   49910 command_runner.go:130] >     },
	I0717 01:00:56.554497   49910 command_runner.go:130] >     {
	I0717 01:00:56.554502   49910 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 01:00:56.554506   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554510   49910 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 01:00:56.554513   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554517   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554523   49910 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 01:00:56.554529   49910 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 01:00:56.554532   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554536   49910 command_runner.go:130] >       "size": "85953433",
	I0717 01:00:56.554540   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.554543   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554546   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554550   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554553   49910 command_runner.go:130] >     },
	I0717 01:00:56.554558   49910 command_runner.go:130] >     {
	I0717 01:00:56.554564   49910 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 01:00:56.554570   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554579   49910 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 01:00:56.554585   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554588   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554598   49910 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 01:00:56.554607   49910 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 01:00:56.554612   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554616   49910 command_runner.go:130] >       "size": "63051080",
	I0717 01:00:56.554622   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.554626   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.554632   49910 command_runner.go:130] >       },
	I0717 01:00:56.554636   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554642   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554646   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.554649   49910 command_runner.go:130] >     },
	I0717 01:00:56.554654   49910 command_runner.go:130] >     {
	I0717 01:00:56.554660   49910 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 01:00:56.554666   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.554671   49910 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 01:00:56.554676   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554681   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.554688   49910 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 01:00:56.554697   49910 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 01:00:56.554702   49910 command_runner.go:130] >       ],
	I0717 01:00:56.554706   49910 command_runner.go:130] >       "size": "750414",
	I0717 01:00:56.554711   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.554716   49910 command_runner.go:130] >         "value": "65535"
	I0717 01:00:56.554721   49910 command_runner.go:130] >       },
	I0717 01:00:56.554725   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.554731   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.554735   49910 command_runner.go:130] >       "pinned": true
	I0717 01:00:56.554738   49910 command_runner.go:130] >     }
	I0717 01:00:56.554741   49910 command_runner.go:130] >   ]
	I0717 01:00:56.554746   49910 command_runner.go:130] > }
	I0717 01:00:56.555785   49910 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:00:56.555802   49910 crio.go:433] Images already preloaded, skipping extraction
	I0717 01:00:56.555858   49910 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:00:56.600246   49910 command_runner.go:130] > {
	I0717 01:00:56.600271   49910 command_runner.go:130] >   "images": [
	I0717 01:00:56.600277   49910 command_runner.go:130] >     {
	I0717 01:00:56.600289   49910 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 01:00:56.600296   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.600307   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 01:00:56.600311   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600316   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.600324   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 01:00:56.600334   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 01:00:56.600337   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600342   49910 command_runner.go:130] >       "size": "65908273",
	I0717 01:00:56.600346   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.600350   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.600357   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.600367   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.600374   49910 command_runner.go:130] >     },
	I0717 01:00:56.600383   49910 command_runner.go:130] >     {
	I0717 01:00:56.600392   49910 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0717 01:00:56.600403   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.600409   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0717 01:00:56.600412   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600417   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.600425   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0717 01:00:56.600435   49910 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0717 01:00:56.600440   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600444   49910 command_runner.go:130] >       "size": "87165492",
	I0717 01:00:56.600450   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.600463   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.600473   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.600482   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.600491   49910 command_runner.go:130] >     },
	I0717 01:00:56.600499   49910 command_runner.go:130] >     {
	I0717 01:00:56.600512   49910 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 01:00:56.600521   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.600537   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 01:00:56.600546   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600570   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.600586   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 01:00:56.600600   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 01:00:56.600608   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600617   49910 command_runner.go:130] >       "size": "1363676",
	I0717 01:00:56.600627   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.600637   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.600647   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.600656   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.600665   49910 command_runner.go:130] >     },
	I0717 01:00:56.600670   49910 command_runner.go:130] >     {
	I0717 01:00:56.600683   49910 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 01:00:56.600692   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.600700   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 01:00:56.600705   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600714   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.600730   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 01:00:56.600753   49910 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 01:00:56.600763   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600769   49910 command_runner.go:130] >       "size": "31470524",
	I0717 01:00:56.600775   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.600783   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.600787   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.600796   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.600805   49910 command_runner.go:130] >     },
	I0717 01:00:56.600813   49910 command_runner.go:130] >     {
	I0717 01:00:56.600826   49910 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 01:00:56.600836   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.600847   49910 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 01:00:56.600855   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600862   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.600875   49910 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 01:00:56.600890   49910 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 01:00:56.600899   49910 command_runner.go:130] >       ],
	I0717 01:00:56.600912   49910 command_runner.go:130] >       "size": "61245718",
	I0717 01:00:56.600921   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.600931   49910 command_runner.go:130] >       "username": "nonroot",
	I0717 01:00:56.600940   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.600948   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.600955   49910 command_runner.go:130] >     },
	I0717 01:00:56.600959   49910 command_runner.go:130] >     {
	I0717 01:00:56.600968   49910 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 01:00:56.600978   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.600989   49910 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 01:00:56.600997   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601006   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.601021   49910 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 01:00:56.601034   49910 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 01:00:56.601040   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601044   49910 command_runner.go:130] >       "size": "150779692",
	I0717 01:00:56.601050   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.601059   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.601068   49910 command_runner.go:130] >       },
	I0717 01:00:56.601077   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.601086   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.601100   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.601108   49910 command_runner.go:130] >     },
	I0717 01:00:56.601116   49910 command_runner.go:130] >     {
	I0717 01:00:56.601124   49910 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 01:00:56.601129   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.601137   49910 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 01:00:56.601146   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601155   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.601170   49910 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 01:00:56.601184   49910 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 01:00:56.601192   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601201   49910 command_runner.go:130] >       "size": "117609954",
	I0717 01:00:56.601208   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.601213   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.601219   49910 command_runner.go:130] >       },
	I0717 01:00:56.601235   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.601244   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.601254   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.601261   49910 command_runner.go:130] >     },
	I0717 01:00:56.601266   49910 command_runner.go:130] >     {
	I0717 01:00:56.601279   49910 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 01:00:56.601289   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.601297   49910 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 01:00:56.601301   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601308   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.601339   49910 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 01:00:56.601354   49910 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 01:00:56.601363   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601373   49910 command_runner.go:130] >       "size": "112194888",
	I0717 01:00:56.601380   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.601384   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.601392   49910 command_runner.go:130] >       },
	I0717 01:00:56.601400   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.601410   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.601419   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.601426   49910 command_runner.go:130] >     },
	I0717 01:00:56.601434   49910 command_runner.go:130] >     {
	I0717 01:00:56.601447   49910 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 01:00:56.601455   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.601465   49910 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 01:00:56.601472   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601478   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.601492   49910 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 01:00:56.601506   49910 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 01:00:56.601514   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601522   49910 command_runner.go:130] >       "size": "85953433",
	I0717 01:00:56.601531   49910 command_runner.go:130] >       "uid": null,
	I0717 01:00:56.601545   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.601552   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.601556   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.601565   49910 command_runner.go:130] >     },
	I0717 01:00:56.601575   49910 command_runner.go:130] >     {
	I0717 01:00:56.601588   49910 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 01:00:56.601597   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.601608   49910 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 01:00:56.601616   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601624   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.601637   49910 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 01:00:56.601647   49910 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 01:00:56.601655   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601663   49910 command_runner.go:130] >       "size": "63051080",
	I0717 01:00:56.601671   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.601681   49910 command_runner.go:130] >         "value": "0"
	I0717 01:00:56.601689   49910 command_runner.go:130] >       },
	I0717 01:00:56.601698   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.601707   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.601715   49910 command_runner.go:130] >       "pinned": false
	I0717 01:00:56.601726   49910 command_runner.go:130] >     },
	I0717 01:00:56.601734   49910 command_runner.go:130] >     {
	I0717 01:00:56.601745   49910 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 01:00:56.601754   49910 command_runner.go:130] >       "repoTags": [
	I0717 01:00:56.601764   49910 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 01:00:56.601772   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601782   49910 command_runner.go:130] >       "repoDigests": [
	I0717 01:00:56.601796   49910 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 01:00:56.601808   49910 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 01:00:56.601814   49910 command_runner.go:130] >       ],
	I0717 01:00:56.601820   49910 command_runner.go:130] >       "size": "750414",
	I0717 01:00:56.601829   49910 command_runner.go:130] >       "uid": {
	I0717 01:00:56.601840   49910 command_runner.go:130] >         "value": "65535"
	I0717 01:00:56.601848   49910 command_runner.go:130] >       },
	I0717 01:00:56.601857   49910 command_runner.go:130] >       "username": "",
	I0717 01:00:56.601865   49910 command_runner.go:130] >       "spec": null,
	I0717 01:00:56.601874   49910 command_runner.go:130] >       "pinned": true
	I0717 01:00:56.601881   49910 command_runner.go:130] >     }
	I0717 01:00:56.601887   49910 command_runner.go:130] >   ]
	I0717 01:00:56.601893   49910 command_runner.go:130] > }
	I0717 01:00:56.602054   49910 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:00:56.602066   49910 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:00:56.602073   49910 kubeadm.go:934] updating node { 192.168.39.36 8443 v1.30.2 crio true true} ...
	I0717 01:00:56.602323   49910 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-905682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-905682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
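The kubeadm.go:946 message above is the kubelet systemd override that minikube generates for this control-plane node, with the hostname override and node IP filled in from the cluster config. The sketch below renders the same drop-in text from those per-node values using text/template; the template constant is copied from the log, while the type and variable names are illustrative assumptions rather than minikube's real code.

package main

import (
	"os"
	"text/template"
)

// kubeletUnit reproduces the override shown in the log above.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type nodeParams struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	// Values taken from the cluster config logged above.
	p := nodeParams{
		KubernetesVersion: "v1.30.2",
		NodeName:          "multinode-905682",
		NodeIP:            "192.168.39.36",
	}
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// minikube writes this text to a systemd drop-in on the node; here it is just printed.
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}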
	I0717 01:00:56.602417   49910 ssh_runner.go:195] Run: crio config
	I0717 01:00:56.642993   49910 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 01:00:56.643024   49910 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 01:00:56.643033   49910 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 01:00:56.643037   49910 command_runner.go:130] > #
	I0717 01:00:56.643047   49910 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 01:00:56.643053   49910 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 01:00:56.643059   49910 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 01:00:56.643065   49910 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 01:00:56.643069   49910 command_runner.go:130] > # reload'.
	I0717 01:00:56.643074   49910 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 01:00:56.643081   49910 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 01:00:56.643101   49910 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 01:00:56.643113   49910 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 01:00:56.643122   49910 command_runner.go:130] > [crio]
	I0717 01:00:56.643131   49910 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 01:00:56.643142   49910 command_runner.go:130] > # containers images, in this directory.
	I0717 01:00:56.643153   49910 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 01:00:56.643172   49910 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 01:00:56.643183   49910 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 01:00:56.643195   49910 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0717 01:00:56.643257   49910 command_runner.go:130] > # imagestore = ""
	I0717 01:00:56.643277   49910 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 01:00:56.643288   49910 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 01:00:56.643385   49910 command_runner.go:130] > storage_driver = "overlay"
	I0717 01:00:56.643399   49910 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 01:00:56.643408   49910 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 01:00:56.643415   49910 command_runner.go:130] > storage_option = [
	I0717 01:00:56.643513   49910 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 01:00:56.643567   49910 command_runner.go:130] > ]
	I0717 01:00:56.643597   49910 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 01:00:56.643610   49910 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 01:00:56.643871   49910 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 01:00:56.643886   49910 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 01:00:56.643895   49910 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 01:00:56.643901   49910 command_runner.go:130] > # always happen on a node reboot
	I0717 01:00:56.644130   49910 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 01:00:56.644155   49910 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 01:00:56.644165   49910 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 01:00:56.644174   49910 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 01:00:56.644353   49910 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0717 01:00:56.644371   49910 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 01:00:56.644386   49910 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 01:00:56.644630   49910 command_runner.go:130] > # internal_wipe = true
	I0717 01:00:56.644647   49910 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0717 01:00:56.644656   49910 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0717 01:00:56.644978   49910 command_runner.go:130] > # internal_repair = false
	I0717 01:00:56.644989   49910 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 01:00:56.644999   49910 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 01:00:56.645008   49910 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 01:00:56.645199   49910 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 01:00:56.645214   49910 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 01:00:56.645220   49910 command_runner.go:130] > [crio.api]
	I0717 01:00:56.645231   49910 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 01:00:56.645451   49910 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 01:00:56.645465   49910 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 01:00:56.645764   49910 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 01:00:56.645782   49910 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 01:00:56.645791   49910 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 01:00:56.645986   49910 command_runner.go:130] > # stream_port = "0"
	I0717 01:00:56.645996   49910 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 01:00:56.646261   49910 command_runner.go:130] > # stream_enable_tls = false
	I0717 01:00:56.646269   49910 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 01:00:56.646485   49910 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 01:00:56.646499   49910 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 01:00:56.646510   49910 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 01:00:56.646519   49910 command_runner.go:130] > # minutes.
	I0717 01:00:56.646642   49910 command_runner.go:130] > # stream_tls_cert = ""
	I0717 01:00:56.646653   49910 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 01:00:56.646659   49910 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 01:00:56.646907   49910 command_runner.go:130] > # stream_tls_key = ""
	I0717 01:00:56.646916   49910 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 01:00:56.646921   49910 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 01:00:56.646942   49910 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 01:00:56.647074   49910 command_runner.go:130] > # stream_tls_ca = ""
	I0717 01:00:56.647101   49910 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 01:00:56.647219   49910 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 01:00:56.647234   49910 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 01:00:56.647367   49910 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 01:00:56.647383   49910 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 01:00:56.647391   49910 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 01:00:56.647399   49910 command_runner.go:130] > [crio.runtime]
	I0717 01:00:56.647411   49910 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 01:00:56.647423   49910 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 01:00:56.647430   49910 command_runner.go:130] > # "nofile=1024:2048"
	I0717 01:00:56.647444   49910 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 01:00:56.647487   49910 command_runner.go:130] > # default_ulimits = [
	I0717 01:00:56.647621   49910 command_runner.go:130] > # ]
	I0717 01:00:56.647637   49910 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 01:00:56.647854   49910 command_runner.go:130] > # no_pivot = false
	I0717 01:00:56.647869   49910 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 01:00:56.647879   49910 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 01:00:56.647889   49910 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 01:00:56.647902   49910 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 01:00:56.647909   49910 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 01:00:56.647923   49910 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 01:00:56.647934   49910 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 01:00:56.647945   49910 command_runner.go:130] > # Cgroup setting for conmon
	I0717 01:00:56.647959   49910 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 01:00:56.647969   49910 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 01:00:56.647982   49910 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 01:00:56.647993   49910 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 01:00:56.648004   49910 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 01:00:56.648014   49910 command_runner.go:130] > conmon_env = [
	I0717 01:00:56.648023   49910 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 01:00:56.648032   49910 command_runner.go:130] > ]
	I0717 01:00:56.648040   49910 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 01:00:56.648051   49910 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 01:00:56.648064   49910 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 01:00:56.648079   49910 command_runner.go:130] > # default_env = [
	I0717 01:00:56.648092   49910 command_runner.go:130] > # ]
	I0717 01:00:56.648101   49910 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 01:00:56.648125   49910 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0717 01:00:56.648132   49910 command_runner.go:130] > # selinux = false
	I0717 01:00:56.648138   49910 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 01:00:56.648144   49910 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 01:00:56.648154   49910 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 01:00:56.648161   49910 command_runner.go:130] > # seccomp_profile = ""
	I0717 01:00:56.648169   49910 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 01:00:56.648181   49910 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 01:00:56.648195   49910 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 01:00:56.648206   49910 command_runner.go:130] > # which might increase security.
	I0717 01:00:56.648218   49910 command_runner.go:130] > # This option is currently deprecated,
	I0717 01:00:56.648228   49910 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0717 01:00:56.648244   49910 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 01:00:56.648254   49910 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 01:00:56.648263   49910 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 01:00:56.648277   49910 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 01:00:56.648290   49910 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 01:00:56.648300   49910 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:00:56.648310   49910 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 01:00:56.648323   49910 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 01:00:56.648333   49910 command_runner.go:130] > # the cgroup blockio controller.
	I0717 01:00:56.648340   49910 command_runner.go:130] > # blockio_config_file = ""
	I0717 01:00:56.648353   49910 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0717 01:00:56.648362   49910 command_runner.go:130] > # blockio parameters.
	I0717 01:00:56.648370   49910 command_runner.go:130] > # blockio_reload = false
	I0717 01:00:56.648382   49910 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 01:00:56.648389   49910 command_runner.go:130] > # irqbalance daemon.
	I0717 01:00:56.648394   49910 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 01:00:56.648405   49910 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0717 01:00:56.648418   49910 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0717 01:00:56.648431   49910 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0717 01:00:56.648445   49910 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0717 01:00:56.648457   49910 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 01:00:56.648470   49910 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:00:56.648479   49910 command_runner.go:130] > # rdt_config_file = ""
	I0717 01:00:56.648488   49910 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 01:00:56.648504   49910 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 01:00:56.648577   49910 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 01:00:56.648590   49910 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 01:00:56.648601   49910 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 01:00:56.648613   49910 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 01:00:56.648622   49910 command_runner.go:130] > # will be added.
	I0717 01:00:56.648629   49910 command_runner.go:130] > # default_capabilities = [
	I0717 01:00:56.648638   49910 command_runner.go:130] > # 	"CHOWN",
	I0717 01:00:56.648643   49910 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 01:00:56.648649   49910 command_runner.go:130] > # 	"FSETID",
	I0717 01:00:56.648653   49910 command_runner.go:130] > # 	"FOWNER",
	I0717 01:00:56.648656   49910 command_runner.go:130] > # 	"SETGID",
	I0717 01:00:56.648660   49910 command_runner.go:130] > # 	"SETUID",
	I0717 01:00:56.648663   49910 command_runner.go:130] > # 	"SETPCAP",
	I0717 01:00:56.648667   49910 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 01:00:56.648671   49910 command_runner.go:130] > # 	"KILL",
	I0717 01:00:56.648674   49910 command_runner.go:130] > # ]
	I0717 01:00:56.648681   49910 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 01:00:56.648690   49910 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 01:00:56.648694   49910 command_runner.go:130] > # add_inheritable_capabilities = false
	I0717 01:00:56.648700   49910 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 01:00:56.648707   49910 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 01:00:56.648711   49910 command_runner.go:130] > default_sysctls = [
	I0717 01:00:56.648716   49910 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0717 01:00:56.648719   49910 command_runner.go:130] > ]
	I0717 01:00:56.648723   49910 command_runner.go:130] > # List of devices on the host that a
	I0717 01:00:56.648730   49910 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 01:00:56.648734   49910 command_runner.go:130] > # allowed_devices = [
	I0717 01:00:56.648738   49910 command_runner.go:130] > # 	"/dev/fuse",
	I0717 01:00:56.648741   49910 command_runner.go:130] > # ]
	I0717 01:00:56.648745   49910 command_runner.go:130] > # List of additional devices. specified as
	I0717 01:00:56.648752   49910 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 01:00:56.648759   49910 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 01:00:56.648764   49910 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 01:00:56.648770   49910 command_runner.go:130] > # additional_devices = [
	I0717 01:00:56.648773   49910 command_runner.go:130] > # ]
	I0717 01:00:56.648785   49910 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 01:00:56.648791   49910 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 01:00:56.648797   49910 command_runner.go:130] > # 	"/etc/cdi",
	I0717 01:00:56.648803   49910 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 01:00:56.648806   49910 command_runner.go:130] > # ]
	I0717 01:00:56.648812   49910 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 01:00:56.648820   49910 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 01:00:56.648824   49910 command_runner.go:130] > # Defaults to false.
	I0717 01:00:56.648828   49910 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 01:00:56.648837   49910 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 01:00:56.648844   49910 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 01:00:56.648848   49910 command_runner.go:130] > # hooks_dir = [
	I0717 01:00:56.648941   49910 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 01:00:56.648952   49910 command_runner.go:130] > # ]
	I0717 01:00:56.648961   49910 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 01:00:56.648971   49910 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 01:00:56.648979   49910 command_runner.go:130] > # its default mounts from the following two files:
	I0717 01:00:56.648987   49910 command_runner.go:130] > #
	I0717 01:00:56.648997   49910 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 01:00:56.649008   49910 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 01:00:56.649017   49910 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 01:00:56.649026   49910 command_runner.go:130] > #
	I0717 01:00:56.649041   49910 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 01:00:56.649054   49910 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 01:00:56.649067   49910 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 01:00:56.649077   49910 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 01:00:56.649083   49910 command_runner.go:130] > #
	I0717 01:00:56.649094   49910 command_runner.go:130] > # default_mounts_file = ""
	I0717 01:00:56.649104   49910 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 01:00:56.649110   49910 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 01:00:56.649114   49910 command_runner.go:130] > pids_limit = 1024
	I0717 01:00:56.649120   49910 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 01:00:56.649127   49910 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 01:00:56.649139   49910 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 01:00:56.649154   49910 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 01:00:56.649163   49910 command_runner.go:130] > # log_size_max = -1
	I0717 01:00:56.649180   49910 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 01:00:56.649190   49910 command_runner.go:130] > # log_to_journald = false
	I0717 01:00:56.649202   49910 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 01:00:56.649210   49910 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 01:00:56.649219   49910 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 01:00:56.649229   49910 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 01:00:56.649238   49910 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 01:00:56.649248   49910 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 01:00:56.649256   49910 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 01:00:56.649263   49910 command_runner.go:130] > # read_only = false
	I0717 01:00:56.649269   49910 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 01:00:56.649275   49910 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 01:00:56.649279   49910 command_runner.go:130] > # live configuration reload.
	I0717 01:00:56.649283   49910 command_runner.go:130] > # log_level = "info"
	I0717 01:00:56.649288   49910 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 01:00:56.649299   49910 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:00:56.649304   49910 command_runner.go:130] > # log_filter = ""
	I0717 01:00:56.649310   49910 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 01:00:56.649316   49910 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 01:00:56.649321   49910 command_runner.go:130] > # separated by comma.
	I0717 01:00:56.649328   49910 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:00:56.649335   49910 command_runner.go:130] > # uid_mappings = ""
	I0717 01:00:56.649341   49910 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 01:00:56.649349   49910 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 01:00:56.649353   49910 command_runner.go:130] > # separated by comma.
	I0717 01:00:56.649362   49910 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:00:56.649366   49910 command_runner.go:130] > # gid_mappings = ""
	I0717 01:00:56.649372   49910 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 01:00:56.649380   49910 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 01:00:56.649385   49910 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 01:00:56.649394   49910 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:00:56.649398   49910 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 01:00:56.649405   49910 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 01:00:56.649411   49910 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 01:00:56.649417   49910 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 01:00:56.649424   49910 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:00:56.649435   49910 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 01:00:56.649443   49910 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 01:00:56.649449   49910 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 01:00:56.649457   49910 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 01:00:56.649461   49910 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 01:00:56.649466   49910 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 01:00:56.649473   49910 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 01:00:56.649478   49910 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 01:00:56.649484   49910 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 01:00:56.649488   49910 command_runner.go:130] > drop_infra_ctr = false
	I0717 01:00:56.649496   49910 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 01:00:56.649501   49910 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 01:00:56.649510   49910 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 01:00:56.649515   49910 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 01:00:56.649523   49910 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0717 01:00:56.649528   49910 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0717 01:00:56.649535   49910 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0717 01:00:56.649540   49910 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0717 01:00:56.649546   49910 command_runner.go:130] > # shared_cpuset = ""
	I0717 01:00:56.649551   49910 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 01:00:56.649556   49910 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 01:00:56.649560   49910 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 01:00:56.649566   49910 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 01:00:56.649571   49910 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 01:00:56.649576   49910 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0717 01:00:56.649587   49910 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0717 01:00:56.649592   49910 command_runner.go:130] > # enable_criu_support = false
	I0717 01:00:56.649597   49910 command_runner.go:130] > # Enable/disable the generation of the container,
	I0717 01:00:56.649603   49910 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0717 01:00:56.649608   49910 command_runner.go:130] > # enable_pod_events = false
	I0717 01:00:56.649613   49910 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 01:00:56.649625   49910 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0717 01:00:56.649631   49910 command_runner.go:130] > # default_runtime = "runc"
	I0717 01:00:56.649636   49910 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 01:00:56.649644   49910 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 01:00:56.649658   49910 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 01:00:56.649665   49910 command_runner.go:130] > # creation as a file is not desired either.
	I0717 01:00:56.649673   49910 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 01:00:56.649679   49910 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 01:00:56.649683   49910 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 01:00:56.649686   49910 command_runner.go:130] > # ]
	I0717 01:00:56.649692   49910 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 01:00:56.649700   49910 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 01:00:56.649706   49910 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0717 01:00:56.649712   49910 command_runner.go:130] > # Each entry in the table should follow the format:
	I0717 01:00:56.649715   49910 command_runner.go:130] > #
	I0717 01:00:56.649720   49910 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0717 01:00:56.649724   49910 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0717 01:00:56.649774   49910 command_runner.go:130] > # runtime_type = "oci"
	I0717 01:00:56.649781   49910 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0717 01:00:56.649785   49910 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0717 01:00:56.649789   49910 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0717 01:00:56.649793   49910 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0717 01:00:56.649797   49910 command_runner.go:130] > # monitor_env = []
	I0717 01:00:56.649802   49910 command_runner.go:130] > # privileged_without_host_devices = false
	I0717 01:00:56.649808   49910 command_runner.go:130] > # allowed_annotations = []
	I0717 01:00:56.649813   49910 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0717 01:00:56.649818   49910 command_runner.go:130] > # Where:
	I0717 01:00:56.649825   49910 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0717 01:00:56.649835   49910 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0717 01:00:56.649843   49910 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 01:00:56.649849   49910 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 01:00:56.649854   49910 command_runner.go:130] > #   in $PATH.
	I0717 01:00:56.649860   49910 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0717 01:00:56.649867   49910 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 01:00:56.649874   49910 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0717 01:00:56.649882   49910 command_runner.go:130] > #   state.
	I0717 01:00:56.649891   49910 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 01:00:56.649902   49910 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0717 01:00:56.649912   49910 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 01:00:56.649924   49910 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 01:00:56.649941   49910 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 01:00:56.649950   49910 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 01:00:56.649954   49910 command_runner.go:130] > #   The currently recognized values are:
	I0717 01:00:56.649963   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 01:00:56.649970   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 01:00:56.649978   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 01:00:56.649983   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 01:00:56.649999   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 01:00:56.650013   49910 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 01:00:56.650025   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0717 01:00:56.650037   49910 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0717 01:00:56.650048   49910 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 01:00:56.650055   49910 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0717 01:00:56.650059   49910 command_runner.go:130] > #   deprecated option "conmon".
	I0717 01:00:56.650068   49910 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0717 01:00:56.650073   49910 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0717 01:00:56.650081   49910 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0717 01:00:56.650088   49910 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 01:00:56.650098   49910 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0717 01:00:56.650109   49910 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0717 01:00:56.650120   49910 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0717 01:00:56.650132   49910 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0717 01:00:56.650137   49910 command_runner.go:130] > #
	I0717 01:00:56.650145   49910 command_runner.go:130] > # Using the seccomp notifier feature:
	I0717 01:00:56.650153   49910 command_runner.go:130] > #
	I0717 01:00:56.650162   49910 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0717 01:00:56.650171   49910 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0717 01:00:56.650174   49910 command_runner.go:130] > #
	I0717 01:00:56.650180   49910 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0717 01:00:56.650189   49910 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0717 01:00:56.650194   49910 command_runner.go:130] > #
	I0717 01:00:56.650207   49910 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0717 01:00:56.650212   49910 command_runner.go:130] > # feature.
	I0717 01:00:56.650220   49910 command_runner.go:130] > #
	I0717 01:00:56.650230   49910 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0717 01:00:56.650246   49910 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0717 01:00:56.650263   49910 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0717 01:00:56.650274   49910 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0717 01:00:56.650282   49910 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0717 01:00:56.650287   49910 command_runner.go:130] > #
	I0717 01:00:56.650297   49910 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0717 01:00:56.650310   49910 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0717 01:00:56.650314   49910 command_runner.go:130] > #
	I0717 01:00:56.650325   49910 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0717 01:00:56.650337   49910 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0717 01:00:56.650345   49910 command_runner.go:130] > #
	I0717 01:00:56.650355   49910 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0717 01:00:56.650367   49910 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0717 01:00:56.650375   49910 command_runner.go:130] > # limitation.
	I0717 01:00:56.650382   49910 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 01:00:56.650386   49910 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 01:00:56.650395   49910 command_runner.go:130] > runtime_type = "oci"
	I0717 01:00:56.650405   49910 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 01:00:56.650411   49910 command_runner.go:130] > runtime_config_path = ""
	I0717 01:00:56.650422   49910 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0717 01:00:56.650430   49910 command_runner.go:130] > monitor_cgroup = "pod"
	I0717 01:00:56.650442   49910 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 01:00:56.650451   49910 command_runner.go:130] > monitor_env = [
	I0717 01:00:56.650463   49910 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 01:00:56.650470   49910 command_runner.go:130] > ]
	I0717 01:00:56.650475   49910 command_runner.go:130] > privileged_without_host_devices = false
	I0717 01:00:56.650483   49910 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 01:00:56.650494   49910 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 01:00:56.650505   49910 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 01:00:56.650520   49910 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 01:00:56.650535   49910 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 01:00:56.650546   49910 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 01:00:56.650562   49910 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 01:00:56.650573   49910 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 01:00:56.650581   49910 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 01:00:56.650595   49910 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 01:00:56.650602   49910 command_runner.go:130] > # Example:
	I0717 01:00:56.650616   49910 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 01:00:56.650624   49910 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 01:00:56.650632   49910 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 01:00:56.650640   49910 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 01:00:56.650645   49910 command_runner.go:130] > # cpuset = 0
	I0717 01:00:56.650651   49910 command_runner.go:130] > # cpushares = "0-1"
	I0717 01:00:56.650655   49910 command_runner.go:130] > # Where:
	I0717 01:00:56.650659   49910 command_runner.go:130] > # The workload name is workload-type.
	I0717 01:00:56.650667   49910 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 01:00:56.650675   49910 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 01:00:56.650684   49910 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 01:00:56.650697   49910 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 01:00:56.650706   49910 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 01:00:56.650713   49910 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0717 01:00:56.650722   49910 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0717 01:00:56.650729   49910 command_runner.go:130] > # Default value is set to true
	I0717 01:00:56.650736   49910 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0717 01:00:56.650742   49910 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0717 01:00:56.650746   49910 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0717 01:00:56.650750   49910 command_runner.go:130] > # Default value is set to 'false'
	I0717 01:00:56.650757   49910 command_runner.go:130] > # disable_hostport_mapping = false
	I0717 01:00:56.650767   49910 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 01:00:56.650775   49910 command_runner.go:130] > #
	I0717 01:00:56.650784   49910 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 01:00:56.650796   49910 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 01:00:56.650805   49910 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 01:00:56.650818   49910 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 01:00:56.650827   49910 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 01:00:56.650831   49910 command_runner.go:130] > [crio.image]
	I0717 01:00:56.650846   49910 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 01:00:56.650857   49910 command_runner.go:130] > # default_transport = "docker://"
	I0717 01:00:56.650867   49910 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 01:00:56.650880   49910 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 01:00:56.650889   49910 command_runner.go:130] > # global_auth_file = ""
	I0717 01:00:56.650897   49910 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 01:00:56.650907   49910 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:00:56.650924   49910 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0717 01:00:56.650933   49910 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 01:00:56.650941   49910 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 01:00:56.650953   49910 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:00:56.650960   49910 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 01:00:56.650977   49910 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 01:00:56.650989   49910 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 01:00:56.651001   49910 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 01:00:56.651013   49910 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 01:00:56.651021   49910 command_runner.go:130] > # pause_command = "/pause"
	I0717 01:00:56.651027   49910 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0717 01:00:56.651038   49910 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0717 01:00:56.651050   49910 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0717 01:00:56.651062   49910 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0717 01:00:56.651075   49910 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0717 01:00:56.651087   49910 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0717 01:00:56.651096   49910 command_runner.go:130] > # pinned_images = [
	I0717 01:00:56.651101   49910 command_runner.go:130] > # ]
	I0717 01:00:56.651116   49910 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 01:00:56.651124   49910 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 01:00:56.651133   49910 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 01:00:56.651145   49910 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 01:00:56.651154   49910 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 01:00:56.651164   49910 command_runner.go:130] > # signature_policy = ""
	I0717 01:00:56.651173   49910 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0717 01:00:56.651186   49910 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0717 01:00:56.651198   49910 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0717 01:00:56.651224   49910 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0717 01:00:56.651239   49910 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0717 01:00:56.651247   49910 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0717 01:00:56.651259   49910 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 01:00:56.651272   49910 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 01:00:56.651281   49910 command_runner.go:130] > # changing them here.
	I0717 01:00:56.651287   49910 command_runner.go:130] > # insecure_registries = [
	I0717 01:00:56.651295   49910 command_runner.go:130] > # ]
	I0717 01:00:56.651304   49910 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 01:00:56.651317   49910 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 01:00:56.651327   49910 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 01:00:56.651335   49910 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 01:00:56.651342   49910 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 01:00:56.651354   49910 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 01:00:56.651363   49910 command_runner.go:130] > # CNI plugins.
	I0717 01:00:56.651369   49910 command_runner.go:130] > [crio.network]
	I0717 01:00:56.651379   49910 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 01:00:56.651391   49910 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 01:00:56.651399   49910 command_runner.go:130] > # cni_default_network = ""
	I0717 01:00:56.651405   49910 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 01:00:56.651415   49910 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 01:00:56.651427   49910 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 01:00:56.651433   49910 command_runner.go:130] > # plugin_dirs = [
	I0717 01:00:56.651442   49910 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 01:00:56.651454   49910 command_runner.go:130] > # ]
	I0717 01:00:56.651465   49910 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 01:00:56.651474   49910 command_runner.go:130] > [crio.metrics]
	I0717 01:00:56.651481   49910 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 01:00:56.651490   49910 command_runner.go:130] > enable_metrics = true
	I0717 01:00:56.651495   49910 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 01:00:56.651502   49910 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 01:00:56.651511   49910 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0717 01:00:56.651524   49910 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 01:00:56.651537   49910 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 01:00:56.651546   49910 command_runner.go:130] > # metrics_collectors = [
	I0717 01:00:56.651554   49910 command_runner.go:130] > # 	"operations",
	I0717 01:00:56.651564   49910 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 01:00:56.651571   49910 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 01:00:56.651581   49910 command_runner.go:130] > # 	"operations_errors",
	I0717 01:00:56.651587   49910 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 01:00:56.651594   49910 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 01:00:56.651599   49910 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 01:00:56.651609   49910 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 01:00:56.651617   49910 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 01:00:56.651623   49910 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 01:00:56.651638   49910 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 01:00:56.651648   49910 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0717 01:00:56.651657   49910 command_runner.go:130] > # 	"containers_oom_total",
	I0717 01:00:56.651664   49910 command_runner.go:130] > # 	"containers_oom",
	I0717 01:00:56.651672   49910 command_runner.go:130] > # 	"processes_defunct",
	I0717 01:00:56.651679   49910 command_runner.go:130] > # 	"operations_total",
	I0717 01:00:56.651687   49910 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 01:00:56.651691   49910 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 01:00:56.651700   49910 command_runner.go:130] > # 	"operations_errors_total",
	I0717 01:00:56.651707   49910 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 01:00:56.651718   49910 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 01:00:56.651724   49910 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 01:00:56.651733   49910 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 01:00:56.651740   49910 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 01:00:56.651750   49910 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 01:00:56.651759   49910 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0717 01:00:56.651768   49910 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0717 01:00:56.651773   49910 command_runner.go:130] > # ]
	I0717 01:00:56.651783   49910 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 01:00:56.651789   49910 command_runner.go:130] > # metrics_port = 9090
	I0717 01:00:56.651794   49910 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 01:00:56.651802   49910 command_runner.go:130] > # metrics_socket = ""
	I0717 01:00:56.651814   49910 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 01:00:56.651824   49910 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 01:00:56.651842   49910 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 01:00:56.651859   49910 command_runner.go:130] > # certificate on any modification event.
	I0717 01:00:56.651868   49910 command_runner.go:130] > # metrics_cert = ""
	I0717 01:00:56.651876   49910 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 01:00:56.651887   49910 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 01:00:56.651896   49910 command_runner.go:130] > # metrics_key = ""
	I0717 01:00:56.651914   49910 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 01:00:56.651923   49910 command_runner.go:130] > [crio.tracing]
	I0717 01:00:56.651932   49910 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 01:00:56.651941   49910 command_runner.go:130] > # enable_tracing = false
	I0717 01:00:56.651951   49910 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 01:00:56.651961   49910 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 01:00:56.651985   49910 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0717 01:00:56.651996   49910 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 01:00:56.652005   49910 command_runner.go:130] > # CRI-O NRI configuration.
	I0717 01:00:56.652013   49910 command_runner.go:130] > [crio.nri]
	I0717 01:00:56.652021   49910 command_runner.go:130] > # Globally enable or disable NRI.
	I0717 01:00:56.652028   49910 command_runner.go:130] > # enable_nri = false
	I0717 01:00:56.652035   49910 command_runner.go:130] > # NRI socket to listen on.
	I0717 01:00:56.652046   49910 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0717 01:00:56.652056   49910 command_runner.go:130] > # NRI plugin directory to use.
	I0717 01:00:56.652067   49910 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0717 01:00:56.652074   49910 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0717 01:00:56.652084   49910 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0717 01:00:56.652097   49910 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0717 01:00:56.652105   49910 command_runner.go:130] > # nri_disable_connections = false
	I0717 01:00:56.652111   49910 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0717 01:00:56.652116   49910 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0717 01:00:56.652123   49910 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0717 01:00:56.652128   49910 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0717 01:00:56.652137   49910 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 01:00:56.652145   49910 command_runner.go:130] > [crio.stats]
	I0717 01:00:56.652157   49910 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 01:00:56.652169   49910 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 01:00:56.652179   49910 command_runner.go:130] > # stats_collection_period = 0
	I0717 01:00:56.652217   49910 command_runner.go:130] ! time="2024-07-17 01:00:56.600744877Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0717 01:00:56.652233   49910 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
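	The block above is the rendered CRI-O configuration (crio.conf plus drop-ins) that the node will run with; settings such as pids_limit = 1024 and the net.ipv4.ip_unprivileged_port_start=0 default sysctl are the values minikube provisions. As a minimal, hedged sketch only (not minikube code), the same keys could be read back from a crio.conf-style TOML file in Go, assuming the third-party github.com/BurntSushi/toml package:

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// crioConfig maps only the few [crio.runtime] keys we want to inspect;
// every other table and key in the file is simply ignored by the decoder.
type crioConfig struct {
	Crio struct {
		Runtime struct {
			PidsLimit      int64    `toml:"pids_limit"`
			DefaultSysctls []string `toml:"default_sysctls"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	// Path is illustrative; on the node the file lives at /etc/crio/crio.conf.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pids_limit:", cfg.Crio.Runtime.PidsLimit)
	fmt.Println("default_sysctls:", cfg.Crio.Runtime.DefaultSysctls)
}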
	I0717 01:00:56.652408   49910 cni.go:84] Creating CNI manager for ""
	I0717 01:00:56.652422   49910 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 01:00:56.652431   49910 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:00:56.652452   49910 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-905682 NodeName:multinode-905682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:00:56.652615   49910 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-905682"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:00:56.652672   49910 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:00:56.662674   49910 command_runner.go:130] > kubeadm
	I0717 01:00:56.662696   49910 command_runner.go:130] > kubectl
	I0717 01:00:56.662702   49910 command_runner.go:130] > kubelet
	I0717 01:00:56.662740   49910 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:00:56.662783   49910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:00:56.671753   49910 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0717 01:00:56.687947   49910 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:00:56.704313   49910 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
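	The kubeadm.yaml.new written here is the config printed above, rendered from a template with this node's values (advertise address 192.168.39.36, node name multinode-905682, the CRI-O socket, and the 10.244.0.0/16 pod subnet). The Go sketch below illustrates that templating approach under stated assumptions; the struct fields and the heavily trimmed template are hypothetical stand-ins, not minikube's actual kubeadm template:

package main

import (
	"os"
	"text/template"
)

// nodeParams holds the per-node values substituted into the config.
type nodeParams struct {
	AdvertiseAddress string
	NodeName         string
	CRISocket        string
	PodSubnet        string
}

// A trimmed stand-in for the full kubeadm config template.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = t.Execute(os.Stdout, nodeParams{
		AdvertiseAddress: "192.168.39.36",
		NodeName:         "multinode-905682",
		CRISocket:        "unix:///var/run/crio/crio.sock",
		PodSubnet:        "10.244.0.0/16",
	})
}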
	I0717 01:00:56.720853   49910 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0717 01:00:56.724527   49910 command_runner.go:130] > 192.168.39.36	control-plane.minikube.internal
	I0717 01:00:56.724604   49910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:00:56.860730   49910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:00:56.876014   49910 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682 for IP: 192.168.39.36
	I0717 01:00:56.876059   49910 certs.go:194] generating shared ca certs ...
	I0717 01:00:56.876097   49910 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:00:56.876423   49910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:00:56.876520   49910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:00:56.876533   49910 certs.go:256] generating profile certs ...
	I0717 01:00:56.876672   49910 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/client.key
	I0717 01:00:56.876751   49910 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/apiserver.key.bbaa5003
	I0717 01:00:56.876797   49910 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/proxy-client.key
	I0717 01:00:56.876812   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 01:00:56.876831   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 01:00:56.876848   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 01:00:56.876864   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 01:00:56.876879   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 01:00:56.876899   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 01:00:56.876917   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 01:00:56.876933   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 01:00:56.876993   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:00:56.877031   49910 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:00:56.877043   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:00:56.877076   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:00:56.877153   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:00:56.877193   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:00:56.877248   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:00:56.877287   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem -> /usr/share/ca-certificates/20068.pem
	I0717 01:00:56.877304   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> /usr/share/ca-certificates/200682.pem
	I0717 01:00:56.877320   49910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:00:56.878208   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:00:56.902519   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:00:56.925183   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:00:56.948288   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:00:56.971027   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:00:56.996074   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:00:57.019717   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:00:57.045330   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/multinode-905682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:00:57.070124   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:00:57.095222   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:00:57.119257   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:00:57.142108   49910 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:00:57.158132   49910 ssh_runner.go:195] Run: openssl version
	I0717 01:00:57.164070   49910 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0717 01:00:57.164166   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:00:57.174750   49910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:00:57.179075   49910 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:00:57.179095   49910 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:00:57.179137   49910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:00:57.184502   49910 command_runner.go:130] > 51391683
	I0717 01:00:57.184603   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:00:57.193432   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:00:57.203397   49910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:00:57.207341   49910 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:00:57.207426   49910 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:00:57.207456   49910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:00:57.212585   49910 command_runner.go:130] > 3ec20f2e
	I0717 01:00:57.212775   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:00:57.221390   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:00:57.231560   49910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:00:57.236061   49910 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:00:57.236087   49910 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:00:57.236122   49910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:00:57.241689   49910 command_runner.go:130] > b5213941
	I0717 01:00:57.241764   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
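	The three blocks above install each CA certificate under /usr/share/ca-certificates and then symlink it into /etc/ssl/certs as <subject-hash>.0 (for example b5213941.0 for minikubeCA.pem), which is the layout OpenSSL's default verify path expects. A hedged Go sketch of that hash-and-symlink step, shelling out to openssl just as the log does; the paths and error handling are illustrative, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and symlinks it
// into dir as <hash>.0, mirroring the "openssl x509 -hash -noout" and
// "ln -fs" steps in the log above.
func linkCACert(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", dir, hash)
	_ = os.Remove(link) // replace an existing link, like "ln -fs"
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}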
	I0717 01:00:57.251294   49910 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:00:57.256083   49910 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:00:57.256108   49910 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0717 01:00:57.256117   49910 command_runner.go:130] > Device: 253,1	Inode: 1057301     Links: 1
	I0717 01:00:57.256127   49910 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 01:00:57.256162   49910 command_runner.go:130] > Access: 2024-07-17 00:54:16.870072117 +0000
	I0717 01:00:57.256179   49910 command_runner.go:130] > Modify: 2024-07-17 00:54:16.870072117 +0000
	I0717 01:00:57.256193   49910 command_runner.go:130] > Change: 2024-07-17 00:54:16.870072117 +0000
	I0717 01:00:57.256200   49910 command_runner.go:130] >  Birth: 2024-07-17 00:54:16.870072117 +0000
	I0717 01:00:57.256253   49910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:00:57.261687   49910 command_runner.go:130] > Certificate will not expire
	I0717 01:00:57.261834   49910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:00:57.267231   49910 command_runner.go:130] > Certificate will not expire
	I0717 01:00:57.267302   49910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:00:57.273037   49910 command_runner.go:130] > Certificate will not expire
	I0717 01:00:57.273314   49910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:00:57.278571   49910 command_runner.go:130] > Certificate will not expire
	I0717 01:00:57.278707   49910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:00:57.284313   49910 command_runner.go:130] > Certificate will not expire
	I0717 01:00:57.284476   49910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:00:57.290138   49910 command_runner.go:130] > Certificate will not expire
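	Each "Certificate will not expire" line is the output of openssl x509 -checkend 86400, which exits 0 when the certificate is still valid 24 hours from now and non-zero otherwise; that exit code is what decides whether the existing certs can be reused. A small illustrative Go helper (a sketch, not minikube's actual certs.go code) wrapping the same check:

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor24h reports whether the certificate at path is still valid
// 86400 seconds (24h) from now, using the same "openssl x509 -checkend"
// exit-code convention as the checks logged above.
func certValidFor24h(path string) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
	return cmd.Run() == nil // exit code 0 => "Certificate will not expire"
}

func main() {
	fmt.Println(certValidFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}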
	I0717 01:00:57.290378   49910 kubeadm.go:392] StartCluster: {Name:multinode-905682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-905682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.142 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:00:57.290482   49910 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:00:57.290529   49910 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:00:57.334763   49910 command_runner.go:130] > 9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40
	I0717 01:00:57.334792   49910 command_runner.go:130] > c3bf51d1de7ff26c7c9aa552da3fe2ffe0724d7803469a79ad74bf4041f2d6ad
	I0717 01:00:57.334801   49910 command_runner.go:130] > b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d
	I0717 01:00:57.334807   49910 command_runner.go:130] > 721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a
	I0717 01:00:57.334813   49910 command_runner.go:130] > 1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514
	I0717 01:00:57.334818   49910 command_runner.go:130] > 6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe
	I0717 01:00:57.334823   49910 command_runner.go:130] > d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16
	I0717 01:00:57.334830   49910 command_runner.go:130] > aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e
	I0717 01:00:57.334850   49910 cri.go:89] found id: "9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40"
	I0717 01:00:57.334861   49910 cri.go:89] found id: "c3bf51d1de7ff26c7c9aa552da3fe2ffe0724d7803469a79ad74bf4041f2d6ad"
	I0717 01:00:57.334866   49910 cri.go:89] found id: "b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d"
	I0717 01:00:57.334871   49910 cri.go:89] found id: "721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a"
	I0717 01:00:57.334875   49910 cri.go:89] found id: "1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514"
	I0717 01:00:57.334879   49910 cri.go:89] found id: "6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe"
	I0717 01:00:57.334884   49910 cri.go:89] found id: "d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16"
	I0717 01:00:57.334890   49910 cri.go:89] found id: "aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e"
	I0717 01:00:57.334893   49910 cri.go:89] found id: ""
	I0717 01:00:57.334932   49910 ssh_runner.go:195] Run: sudo runc list -f json
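The block above collects the IDs of every kube-system container, running or exited, by running `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` over SSH and treating each non-empty output line as one container ID ("found id: ..."). A minimal standalone Go sketch of that step, assuming crictl is installed and runnable via sudo on the local machine; the helper name is hypothetical and this is not minikube's cri.go code.

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// listKubeSystemContainers mirrors the "listing CRI containers" step above:
// it asks crictl for all container IDs (running or exited) whose pod lives in
// the kube-system namespace, one ID per output line.
func listKubeSystemContainers() ([]string, error) {
	cmd := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	out, err := cmd.Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	var ids []string
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if line := sc.Text(); line != "" {
			ids = append(ids, line) // each line is a full container ID
		}
	}
	return ids, sc.Err()
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}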
	
	
	==> CRI-O <==
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.594862314Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721178303594839725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df9c3e78-8af8-4543-8de0-58debf1290bc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.595638048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b4cec26-d348-4391-a932-0432f81ce04b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.595695856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b4cec26-d348-4391-a932-0432f81ce04b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.596125059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df6e185f483edcd114c5b5e1e069a749aa09fae6aea83a64ca5f00aa3aabe122,PodSandboxId:2c42a8a363a16c1561b4f191e85d2f9e4640c3dcdc85c1420c24fb4df0310f1a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721178097788168943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1aa9c30d3b3ddf3ec3a7a6ca5279181734e5ff502c1dce9aaa9a3d4af79779,PodSandboxId:fa902dff8a01f1da5b3fde6de6ac65d4e90d9f8e3c0f911d9abe06cb4b7deb1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721178064307786299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe0882e6b92d6937fa81f77ea5183c441439f5a5a397ef45b6e629d342dd81c,PodSandboxId:0075107f7665f391900a232f69c36f579cb4c0a44ba25b65ef771987bfc97c63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721178064255156031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18
acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9435d9f50926b9082e0fe944b074713267772411923e8525accce36e3a19a1b,PodSandboxId:8b1ebf053e2c9f73ea821ef74c1a72efa09b99dec311f5d57cc9e350a6d9ca40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721178064115713526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]
string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b81e196cebfdae7bd4c4fea9f33fb3032641523486a3af91af989984dc20a83,PodSandboxId:cb53c056124ab490e1a5a107da3c7088a65775cd8e69e9d89689135aebfa2aa0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721178064147281768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:436a07b748e2dfc1ac19af9dd6966cfcf47fe716502cfdb55f2d6958cfe929b5,PodSandboxId:b1359287405ff1a0bcff6a64a87a47e6a90edb07d269f1806e95b7e5e23df21e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721178060352264994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6bab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a493abca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eae15fbe2335cb28bf1bdfe2a4ae0fb76137c57ea797170a221bce21a335c9d,PodSandboxId:e42479eaa869c667fe11416b5f4f1c71cc7d94cc889e2931ca0d51f87edb600e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721178060273640525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838
e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6742398bcf0e145b5c9d5bd3ee8f9a09aab4acee70075dacb8cef41bf0b2f64,PodSandboxId:0f09420e5dbdb83609705535eabbea00df36dbe358988ba512151614e6cefab3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721178060187402452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 839952580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c2a2c7bd9c8f60abe978569281bedfeb073a8aaaaded1ec5bf7db59556b677,PodSandboxId:ca608ede8b04ed625008072c417ba75de623976f4ffbac722578006ff6007dfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721178060167184071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a6fadd9efc4798dd9696ee44a8d4904525114a1b7f68c3f1eb84af01d321b0,PodSandboxId:6f750ea9aba5f5a09faeeb78de83406a0ca1c80f325c37d33e49abb36353ccd0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177743825240928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40,PodSandboxId:80dbc679f84500871a825d3df2b7f343feea793940183b016ec69ead09dfd547,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177695984538433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bf51d1de7ff26c7c9aa552da3fe2ffe0724d7803469a79ad74bf4041f2d6ad,PodSandboxId:90dbf5cb53d63b1007b96ac2f15b3ab5addf7c91c47aee5e35979b408bdf7c86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177695971674238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d,PodSandboxId:f9ff89031ae51ec6fa95a38321345b3aa2bc57bf8751c5088f062689405608ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721177684024521477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a,PodSandboxId:d5505d12e4eea6371d163a87a0a0fad1a36f4638a55faa0ec6bd8670095c9a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177682037117710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514,PodSandboxId:9ab38025fdf530e56aa514af0c177da6084a28291f77921d67bc21075d30978b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721177661075973367,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16,PodSandboxId:6ba4af0d3ccef8a42ebd9e065840321dbe05bbbfdb4264d02d4fb4560fe448fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177660981349473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6b
ab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.container.hash: a493abca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe,PodSandboxId:0537e60ae6cb6e20e256f53a4c96c5849b669bc05f3633b31dcd1dae06faa155,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177661017336925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83995
2580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e,PodSandboxId:7294fa65d3f2282a6e67ad2366363868614dd6aee47e40788668d22f29d60892,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177660961296009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map
[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b4cec26-d348-4391-a932-0432f81ce04b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.638263396Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da0b54c8-2cb8-4c59-89eb-bac89e989f7d name=/runtime.v1.RuntimeService/Version
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.638342880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da0b54c8-2cb8-4c59-89eb-bac89e989f7d name=/runtime.v1.RuntimeService/Version
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.639506350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd345a1e-433b-480a-82b6-508f620cf785 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.640215434Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721178303639901499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd345a1e-433b-480a-82b6-508f620cf785 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.640861852Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3bc875d9-1f3c-43b5-b2b4-573feb17f15c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.640958440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3bc875d9-1f3c-43b5-b2b4-573feb17f15c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.641332609Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df6e185f483edcd114c5b5e1e069a749aa09fae6aea83a64ca5f00aa3aabe122,PodSandboxId:2c42a8a363a16c1561b4f191e85d2f9e4640c3dcdc85c1420c24fb4df0310f1a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721178097788168943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1aa9c30d3b3ddf3ec3a7a6ca5279181734e5ff502c1dce9aaa9a3d4af79779,PodSandboxId:fa902dff8a01f1da5b3fde6de6ac65d4e90d9f8e3c0f911d9abe06cb4b7deb1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721178064307786299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe0882e6b92d6937fa81f77ea5183c441439f5a5a397ef45b6e629d342dd81c,PodSandboxId:0075107f7665f391900a232f69c36f579cb4c0a44ba25b65ef771987bfc97c63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721178064255156031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18
acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9435d9f50926b9082e0fe944b074713267772411923e8525accce36e3a19a1b,PodSandboxId:8b1ebf053e2c9f73ea821ef74c1a72efa09b99dec311f5d57cc9e350a6d9ca40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721178064115713526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]
string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b81e196cebfdae7bd4c4fea9f33fb3032641523486a3af91af989984dc20a83,PodSandboxId:cb53c056124ab490e1a5a107da3c7088a65775cd8e69e9d89689135aebfa2aa0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721178064147281768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:436a07b748e2dfc1ac19af9dd6966cfcf47fe716502cfdb55f2d6958cfe929b5,PodSandboxId:b1359287405ff1a0bcff6a64a87a47e6a90edb07d269f1806e95b7e5e23df21e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721178060352264994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6bab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a493abca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eae15fbe2335cb28bf1bdfe2a4ae0fb76137c57ea797170a221bce21a335c9d,PodSandboxId:e42479eaa869c667fe11416b5f4f1c71cc7d94cc889e2931ca0d51f87edb600e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721178060273640525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838
e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6742398bcf0e145b5c9d5bd3ee8f9a09aab4acee70075dacb8cef41bf0b2f64,PodSandboxId:0f09420e5dbdb83609705535eabbea00df36dbe358988ba512151614e6cefab3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721178060187402452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 839952580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c2a2c7bd9c8f60abe978569281bedfeb073a8aaaaded1ec5bf7db59556b677,PodSandboxId:ca608ede8b04ed625008072c417ba75de623976f4ffbac722578006ff6007dfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721178060167184071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a6fadd9efc4798dd9696ee44a8d4904525114a1b7f68c3f1eb84af01d321b0,PodSandboxId:6f750ea9aba5f5a09faeeb78de83406a0ca1c80f325c37d33e49abb36353ccd0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177743825240928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40,PodSandboxId:80dbc679f84500871a825d3df2b7f343feea793940183b016ec69ead09dfd547,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177695984538433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bf51d1de7ff26c7c9aa552da3fe2ffe0724d7803469a79ad74bf4041f2d6ad,PodSandboxId:90dbf5cb53d63b1007b96ac2f15b3ab5addf7c91c47aee5e35979b408bdf7c86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177695971674238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d,PodSandboxId:f9ff89031ae51ec6fa95a38321345b3aa2bc57bf8751c5088f062689405608ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721177684024521477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a,PodSandboxId:d5505d12e4eea6371d163a87a0a0fad1a36f4638a55faa0ec6bd8670095c9a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177682037117710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514,PodSandboxId:9ab38025fdf530e56aa514af0c177da6084a28291f77921d67bc21075d30978b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721177661075973367,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16,PodSandboxId:6ba4af0d3ccef8a42ebd9e065840321dbe05bbbfdb4264d02d4fb4560fe448fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177660981349473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6b
ab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.container.hash: a493abca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe,PodSandboxId:0537e60ae6cb6e20e256f53a4c96c5849b669bc05f3633b31dcd1dae06faa155,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177661017336925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83995
2580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e,PodSandboxId:7294fa65d3f2282a6e67ad2366363868614dd6aee47e40788668d22f29d60892,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177660961296009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map
[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3bc875d9-1f3c-43b5-b2b4-573feb17f15c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.685016353Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e062637-2ef2-4f04-8c7a-fea710d757a3 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.685096475Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e062637-2ef2-4f04-8c7a-fea710d757a3 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.686480349Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=166f7447-7e01-45fb-83b3-a13e08c92594 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.687115468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721178303687089225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=166f7447-7e01-45fb-83b3-a13e08c92594 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.687570534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1bd43ce1-0a45-41b2-9ccc-709fa54f0b8c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.687671272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1bd43ce1-0a45-41b2-9ccc-709fa54f0b8c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.689284402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df6e185f483edcd114c5b5e1e069a749aa09fae6aea83a64ca5f00aa3aabe122,PodSandboxId:2c42a8a363a16c1561b4f191e85d2f9e4640c3dcdc85c1420c24fb4df0310f1a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721178097788168943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1aa9c30d3b3ddf3ec3a7a6ca5279181734e5ff502c1dce9aaa9a3d4af79779,PodSandboxId:fa902dff8a01f1da5b3fde6de6ac65d4e90d9f8e3c0f911d9abe06cb4b7deb1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721178064307786299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe0882e6b92d6937fa81f77ea5183c441439f5a5a397ef45b6e629d342dd81c,PodSandboxId:0075107f7665f391900a232f69c36f579cb4c0a44ba25b65ef771987bfc97c63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721178064255156031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18
acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9435d9f50926b9082e0fe944b074713267772411923e8525accce36e3a19a1b,PodSandboxId:8b1ebf053e2c9f73ea821ef74c1a72efa09b99dec311f5d57cc9e350a6d9ca40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721178064115713526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]
string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b81e196cebfdae7bd4c4fea9f33fb3032641523486a3af91af989984dc20a83,PodSandboxId:cb53c056124ab490e1a5a107da3c7088a65775cd8e69e9d89689135aebfa2aa0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721178064147281768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:436a07b748e2dfc1ac19af9dd6966cfcf47fe716502cfdb55f2d6958cfe929b5,PodSandboxId:b1359287405ff1a0bcff6a64a87a47e6a90edb07d269f1806e95b7e5e23df21e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721178060352264994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6bab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a493abca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eae15fbe2335cb28bf1bdfe2a4ae0fb76137c57ea797170a221bce21a335c9d,PodSandboxId:e42479eaa869c667fe11416b5f4f1c71cc7d94cc889e2931ca0d51f87edb600e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721178060273640525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838
e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6742398bcf0e145b5c9d5bd3ee8f9a09aab4acee70075dacb8cef41bf0b2f64,PodSandboxId:0f09420e5dbdb83609705535eabbea00df36dbe358988ba512151614e6cefab3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721178060187402452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 839952580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c2a2c7bd9c8f60abe978569281bedfeb073a8aaaaded1ec5bf7db59556b677,PodSandboxId:ca608ede8b04ed625008072c417ba75de623976f4ffbac722578006ff6007dfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721178060167184071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a6fadd9efc4798dd9696ee44a8d4904525114a1b7f68c3f1eb84af01d321b0,PodSandboxId:6f750ea9aba5f5a09faeeb78de83406a0ca1c80f325c37d33e49abb36353ccd0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177743825240928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40,PodSandboxId:80dbc679f84500871a825d3df2b7f343feea793940183b016ec69ead09dfd547,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177695984538433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bf51d1de7ff26c7c9aa552da3fe2ffe0724d7803469a79ad74bf4041f2d6ad,PodSandboxId:90dbf5cb53d63b1007b96ac2f15b3ab5addf7c91c47aee5e35979b408bdf7c86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177695971674238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d,PodSandboxId:f9ff89031ae51ec6fa95a38321345b3aa2bc57bf8751c5088f062689405608ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721177684024521477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a,PodSandboxId:d5505d12e4eea6371d163a87a0a0fad1a36f4638a55faa0ec6bd8670095c9a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177682037117710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514,PodSandboxId:9ab38025fdf530e56aa514af0c177da6084a28291f77921d67bc21075d30978b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721177661075973367,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16,PodSandboxId:6ba4af0d3ccef8a42ebd9e065840321dbe05bbbfdb4264d02d4fb4560fe448fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177660981349473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6b
ab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.container.hash: a493abca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe,PodSandboxId:0537e60ae6cb6e20e256f53a4c96c5849b669bc05f3633b31dcd1dae06faa155,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177661017336925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83995
2580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e,PodSandboxId:7294fa65d3f2282a6e67ad2366363868614dd6aee47e40788668d22f29d60892,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177660961296009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map
[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1bd43ce1-0a45-41b2-9ccc-709fa54f0b8c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.733572244Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=033021da-6e84-443d-a0fc-ce55e2633d2c name=/runtime.v1.RuntimeService/Version
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.733652110Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=033021da-6e84-443d-a0fc-ce55e2633d2c name=/runtime.v1.RuntimeService/Version
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.735796075Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97dea162-f877-4d86-bc04-393670ca937f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.736320070Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721178303736291794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97dea162-f877-4d86-bc04-393670ca937f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.737070744Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd108e2b-f4ad-4fc5-8740-b0f87c9987ef name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.737145090Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd108e2b-f4ad-4fc5-8740-b0f87c9987ef name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:05:03 multinode-905682 crio[2924]: time="2024-07-17 01:05:03.737810166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df6e185f483edcd114c5b5e1e069a749aa09fae6aea83a64ca5f00aa3aabe122,PodSandboxId:2c42a8a363a16c1561b4f191e85d2f9e4640c3dcdc85c1420c24fb4df0310f1a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721178097788168943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1aa9c30d3b3ddf3ec3a7a6ca5279181734e5ff502c1dce9aaa9a3d4af79779,PodSandboxId:fa902dff8a01f1da5b3fde6de6ac65d4e90d9f8e3c0f911d9abe06cb4b7deb1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721178064307786299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe0882e6b92d6937fa81f77ea5183c441439f5a5a397ef45b6e629d342dd81c,PodSandboxId:0075107f7665f391900a232f69c36f579cb4c0a44ba25b65ef771987bfc97c63,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721178064255156031,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18
acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9435d9f50926b9082e0fe944b074713267772411923e8525accce36e3a19a1b,PodSandboxId:8b1ebf053e2c9f73ea821ef74c1a72efa09b99dec311f5d57cc9e350a6d9ca40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721178064115713526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]
string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b81e196cebfdae7bd4c4fea9f33fb3032641523486a3af91af989984dc20a83,PodSandboxId:cb53c056124ab490e1a5a107da3c7088a65775cd8e69e9d89689135aebfa2aa0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721178064147281768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:436a07b748e2dfc1ac19af9dd6966cfcf47fe716502cfdb55f2d6958cfe929b5,PodSandboxId:b1359287405ff1a0bcff6a64a87a47e6a90edb07d269f1806e95b7e5e23df21e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721178060352264994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6bab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a493abca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eae15fbe2335cb28bf1bdfe2a4ae0fb76137c57ea797170a221bce21a335c9d,PodSandboxId:e42479eaa869c667fe11416b5f4f1c71cc7d94cc889e2931ca0d51f87edb600e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721178060273640525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838
e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6742398bcf0e145b5c9d5bd3ee8f9a09aab4acee70075dacb8cef41bf0b2f64,PodSandboxId:0f09420e5dbdb83609705535eabbea00df36dbe358988ba512151614e6cefab3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721178060187402452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 839952580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c2a2c7bd9c8f60abe978569281bedfeb073a8aaaaded1ec5bf7db59556b677,PodSandboxId:ca608ede8b04ed625008072c417ba75de623976f4ffbac722578006ff6007dfb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721178060167184071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2a6fadd9efc4798dd9696ee44a8d4904525114a1b7f68c3f1eb84af01d321b0,PodSandboxId:6f750ea9aba5f5a09faeeb78de83406a0ca1c80f325c37d33e49abb36353ccd0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177743825240928,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l7kh7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3241a7b-8574-4523-a8c3-749622a7adc7,},Annotations:map[string]string{io.kubernetes.container.hash: dd56dd7f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40,PodSandboxId:80dbc679f84500871a825d3df2b7f343feea793940183b016ec69ead09dfd547,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177695984538433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsqqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8f4af1-9dd4-40d1-b3dd-d46d2e02a3e9,},Annotations:map[string]string{io.kubernetes.container.hash: d3925230,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bf51d1de7ff26c7c9aa552da3fe2ffe0724d7803469a79ad74bf4041f2d6ad,PodSandboxId:90dbf5cb53d63b1007b96ac2f15b3ab5addf7c91c47aee5e35979b408bdf7c86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177695971674238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 08023a17-949f-4160-b54d-9239629fc0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7e0d36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d,PodSandboxId:f9ff89031ae51ec6fa95a38321345b3aa2bc57bf8751c5088f062689405608ce,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721177684024521477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qnxcz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 93b6c6dc-424a-4d24-aabc-6cf18acf53a9,},Annotations:map[string]string{io.kubernetes.container.hash: 77780a6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a,PodSandboxId:d5505d12e4eea6371d163a87a0a0fad1a36f4638a55faa0ec6bd8670095c9a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177682037117710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ml4v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 801b3f18-a89d-4cfe-ae0b-29d86546a71c,},Annotations:map[string]string{io.kubernetes.container.hash: 425d8c21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514,PodSandboxId:9ab38025fdf530e56aa514af0c177da6084a28291f77921d67bc21075d30978b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721177661075973367,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
6d2125628731aacad37666b6f9e1c70,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16,PodSandboxId:6ba4af0d3ccef8a42ebd9e065840321dbe05bbbfdb4264d02d4fb4560fe448fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177660981349473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c5b2d1f44198ce6b
ab2706d2749a8b4,},Annotations:map[string]string{io.kubernetes.container.hash: a493abca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe,PodSandboxId:0537e60ae6cb6e20e256f53a4c96c5849b669bc05f3633b31dcd1dae06faa155,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177661017336925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83995
2580c12b3bff1bd5eff119c7171,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e,PodSandboxId:7294fa65d3f2282a6e67ad2366363868614dd6aee47e40788668d22f29d60892,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177660961296009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-905682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6682739d140b831b9f69a284e347a7cf,},Annotations:map
[string]string{io.kubernetes.container.hash: ebcac552,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd108e2b-f4ad-4fc5-8740-b0f87c9987ef name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	df6e185f483ed       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   2c42a8a363a16       busybox-fc5497c4f-l7kh7
	7c1aa9c30d3b3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   fa902dff8a01f       coredns-7db6d8ff4d-lsqqt
	efe0882e6b92d       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      3 minutes ago       Running             kindnet-cni               1                   0075107f7665f       kindnet-qnxcz
	9b81e196cebfd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   cb53c056124ab       storage-provisioner
	b9435d9f50926       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      3 minutes ago       Running             kube-proxy                1                   8b1ebf053e2c9       kube-proxy-ml4v5
	436a07b748e2d       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago       Running             kube-apiserver            1                   b1359287405ff       kube-apiserver-multinode-905682
	6eae15fbe2335       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      4 minutes ago       Running             kube-scheduler            1                   e42479eaa869c       kube-scheduler-multinode-905682
	a6742398bcf0e       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago       Running             kube-controller-manager   1                   0f09420e5dbdb       kube-controller-manager-multinode-905682
	97c2a2c7bd9c8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   ca608ede8b04e       etcd-multinode-905682
	d2a6fadd9efc4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   6f750ea9aba5f       busybox-fc5497c4f-l7kh7
	9ea48c9be6b6a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   80dbc679f8450       coredns-7db6d8ff4d-lsqqt
	c3bf51d1de7ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   90dbf5cb53d63       storage-provisioner
	b8197caff6893       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    10 minutes ago      Exited              kindnet-cni               0                   f9ff89031ae51       kindnet-qnxcz
	721df31d239ea       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      10 minutes ago      Exited              kube-proxy                0                   d5505d12e4eea       kube-proxy-ml4v5
	1f10eb4245589       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      10 minutes ago      Exited              kube-scheduler            0                   9ab38025fdf53       kube-scheduler-multinode-905682
	6d976dd7c9b9a       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      10 minutes ago      Exited              kube-controller-manager   0                   0537e60ae6cb6       kube-controller-manager-multinode-905682
	d8de5d5cf3c37       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      10 minutes ago      Exited              kube-apiserver            0                   6ba4af0d3ccef       kube-apiserver-multinode-905682
	aa6b9c507f3cc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   7294fa65d3f22       etcd-multinode-905682
	
	
	==> coredns [7c1aa9c30d3b3ddf3ec3a7a6ca5279181734e5ff502c1dce9aaa9a3d4af79779] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36122 - 64325 "HINFO IN 2811894640309302459.1035428133850246961. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012559467s
	
	
	==> coredns [9ea48c9be6b6aab523f49ae4081e9cfcde748e636bd3dcb60e6a2fdf565eec40] <==
	[INFO] 10.244.0.3:40210 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00164537s
	[INFO] 10.244.0.3:33091 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000080103s
	[INFO] 10.244.0.3:44220 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068107s
	[INFO] 10.244.0.3:39279 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001164989s
	[INFO] 10.244.0.3:50172 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000053609s
	[INFO] 10.244.0.3:53946 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076675s
	[INFO] 10.244.0.3:33143 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060419s
	[INFO] 10.244.1.2:38772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119481s
	[INFO] 10.244.1.2:43591 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172508s
	[INFO] 10.244.1.2:44519 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087682s
	[INFO] 10.244.1.2:50162 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198369s
	[INFO] 10.244.0.3:53625 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109023s
	[INFO] 10.244.0.3:59185 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086714s
	[INFO] 10.244.0.3:38795 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062683s
	[INFO] 10.244.0.3:44968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000120421s
	[INFO] 10.244.1.2:34748 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00034098s
	[INFO] 10.244.1.2:38329 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132405s
	[INFO] 10.244.1.2:41014 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157698s
	[INFO] 10.244.1.2:42947 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100469s
	[INFO] 10.244.0.3:41628 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142382s
	[INFO] 10.244.0.3:40676 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106177s
	[INFO] 10.244.0.3:44867 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080057s
	[INFO] 10.244.0.3:42798 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065885s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-905682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-905682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-905682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_54_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:54:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-905682
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:04:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:01:03 +0000   Wed, 17 Jul 2024 00:54:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:01:03 +0000   Wed, 17 Jul 2024 00:54:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:01:03 +0000   Wed, 17 Jul 2024 00:54:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:01:03 +0000   Wed, 17 Jul 2024 00:54:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    multinode-905682
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 06acdb00665d43b4841d6fcbb58dedca
	  System UUID:                06acdb00-665d-43b4-841d-6fcbb58dedca
	  Boot ID:                    f9a3be44-e3ca-44b5-8df1-402904ce325d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l7kh7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 coredns-7db6d8ff4d-lsqqt                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-905682                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-qnxcz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-905682             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-905682    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-ml4v5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-905682             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-905682 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-905682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-905682 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-905682 event: Registered Node multinode-905682 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-905682 status is now: NodeReady
	  Normal  Starting                 4m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m5s)  kubelet          Node multinode-905682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m5s)  kubelet          Node multinode-905682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m5s)  kubelet          Node multinode-905682 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m48s                node-controller  Node multinode-905682 event: Registered Node multinode-905682 in Controller
	
	
	Name:               multinode-905682-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-905682-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=multinode-905682
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T01_01_45_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:01:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-905682-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:02:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 01:02:15 +0000   Wed, 17 Jul 2024 01:03:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 01:02:15 +0000   Wed, 17 Jul 2024 01:03:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 01:02:15 +0000   Wed, 17 Jul 2024 01:03:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 01:02:15 +0000   Wed, 17 Jul 2024 01:03:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    multinode-905682-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 729fbc2b2fee4b2193c8df67ed8c3dad
	  System UUID:                729fbc2b-2fee-4b21-93c8-df67ed8c3dad
	  Boot ID:                    c9c693a2-7060-474a-91ca-a40287e077f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-r7st6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kindnet-tjng8              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m42s
	  kube-system                 kube-proxy-6qxcv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m15s                  kube-proxy       
	  Normal  Starting                 9m36s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m42s (x2 over 9m42s)  kubelet          Node multinode-905682-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m42s (x2 over 9m42s)  kubelet          Node multinode-905682-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m42s (x2 over 9m42s)  kubelet          Node multinode-905682-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m24s                  kubelet          Node multinode-905682-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m20s (x2 over 3m20s)  kubelet          Node multinode-905682-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m20s (x2 over 3m20s)  kubelet          Node multinode-905682-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m20s (x2 over 3m20s)  kubelet          Node multinode-905682-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m3s                   kubelet          Node multinode-905682-m02 status is now: NodeReady
	  Normal  NodeNotReady             108s                   node-controller  Node multinode-905682-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.058916] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.179533] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.112965] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.292893] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.088251] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +5.019323] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.062593] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.993169] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.072896] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.176047] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	[  +0.116232] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.644315] kauditd_printk_skb: 60 callbacks suppressed
	[Jul17 00:55] kauditd_printk_skb: 12 callbacks suppressed
	[Jul17 01:00] systemd-fstab-generator[2841]: Ignoring "noauto" option for root device
	[  +0.140264] systemd-fstab-generator[2853]: Ignoring "noauto" option for root device
	[  +0.162012] systemd-fstab-generator[2867]: Ignoring "noauto" option for root device
	[  +0.154796] systemd-fstab-generator[2879]: Ignoring "noauto" option for root device
	[  +0.298338] systemd-fstab-generator[2908]: Ignoring "noauto" option for root device
	[  +2.314401] systemd-fstab-generator[3008]: Ignoring "noauto" option for root device
	[  +2.553502] systemd-fstab-generator[3131]: Ignoring "noauto" option for root device
	[  +0.078692] kauditd_printk_skb: 122 callbacks suppressed
	[Jul17 01:01] kauditd_printk_skb: 82 callbacks suppressed
	[ +11.843518] kauditd_printk_skb: 2 callbacks suppressed
	[  +4.027250] systemd-fstab-generator[3959]: Ignoring "noauto" option for root device
	[ +17.456486] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [97c2a2c7bd9c8f60abe978569281bedfeb073a8aaaaded1ec5bf7db59556b677] <==
	{"level":"info","ts":"2024-07-17T01:01:00.674086Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:01:00.674148Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:01:00.680112Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:01:00.682721Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-07-17T01:01:00.68496Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-07-17T01:01:00.68688Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"74e924d55c832457","initial-advertise-peer-urls":["https://192.168.39.36:2380"],"listen-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.36:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:01:00.687147Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:01:01.977201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T01:01:01.97727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:01:01.97731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 received MsgPreVoteResp from 74e924d55c832457 at term 2"}
	{"level":"info","ts":"2024-07-17T01:01:01.977321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T01:01:01.977327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 received MsgVoteResp from 74e924d55c832457 at term 3"}
	{"level":"info","ts":"2024-07-17T01:01:01.977338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T01:01:01.977347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 74e924d55c832457 elected leader 74e924d55c832457 at term 3"}
	{"level":"info","ts":"2024-07-17T01:01:01.981786Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"74e924d55c832457","local-member-attributes":"{Name:multinode-905682 ClientURLs:[https://192.168.39.36:2379]}","request-path":"/0/members/74e924d55c832457/attributes","cluster-id":"4bc1bccd4ea9d8cb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:01:01.981845Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:01:01.982339Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:01:01.984471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:01:01.986141Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.36:2379"}
	{"level":"info","ts":"2024-07-17T01:01:01.988984Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:01:01.989017Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:02:24.367564Z","caller":"traceutil/trace.go:171","msg":"trace[637141115] linearizableReadLoop","detail":"{readStateIndex:1222; appliedIndex:1221; }","duration":"209.327662ms","start":"2024-07-17T01:02:24.158211Z","end":"2024-07-17T01:02:24.367539Z","steps":["trace[637141115] 'read index received'  (duration: 209.185311ms)","trace[637141115] 'applied index is now lower than readState.Index'  (duration: 142.021µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:02:24.367855Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.575652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-905682-m03\" ","response":"range_response_count:1 size:3118"}
	{"level":"info","ts":"2024-07-17T01:02:24.367988Z","caller":"traceutil/trace.go:171","msg":"trace[397759538] range","detail":"{range_begin:/registry/minions/multinode-905682-m03; range_end:; response_count:1; response_revision:1110; }","duration":"209.790664ms","start":"2024-07-17T01:02:24.158188Z","end":"2024-07-17T01:02:24.367979Z","steps":["trace[397759538] 'agreement among raft nodes before linearized reading'  (duration: 209.521984ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:02:24.368058Z","caller":"traceutil/trace.go:171","msg":"trace[949569059] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"214.511087ms","start":"2024-07-17T01:02:24.153531Z","end":"2024-07-17T01:02:24.368042Z","steps":["trace[949569059] 'process raft request'  (duration: 213.906823ms)"],"step_count":1}
	
	
	==> etcd [aa6b9c507f3cc80930903149f3980344f2c9bcb1f0e3880d9b7e17768066952e] <==
	{"level":"info","ts":"2024-07-17T00:54:22.013126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T00:54:22.013136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 74e924d55c832457 elected leader 74e924d55c832457 at term 2"}
	{"level":"info","ts":"2024-07-17T00:54:22.017204Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"74e924d55c832457","local-member-attributes":"{Name:multinode-905682 ClientURLs:[https://192.168.39.36:2379]}","request-path":"/0/members/74e924d55c832457/attributes","cluster-id":"4bc1bccd4ea9d8cb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T00:54:22.017331Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:54:22.018298Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:54:22.023453Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.36:2379"}
	{"level":"info","ts":"2024-07-17T00:54:22.017471Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:54:22.019966Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T00:54:22.024981Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T00:54:22.0288Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T00:54:22.031078Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4bc1bccd4ea9d8cb","local-member-id":"74e924d55c832457","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:54:22.031285Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:54:22.031381Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:56:12.372017Z","caller":"traceutil/trace.go:171","msg":"trace[837375532] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"172.152329ms","start":"2024-07-17T00:56:12.199827Z","end":"2024-07-17T00:56:12.371979Z","steps":["trace[837375532] 'process raft request'  (duration: 108.811284ms)","trace[837375532] 'compare'  (duration: 63.171334ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:56:12.372339Z","caller":"traceutil/trace.go:171","msg":"trace[1065103114] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"155.437581ms","start":"2024-07-17T00:56:12.21689Z","end":"2024-07-17T00:56:12.372328Z","steps":["trace[1065103114] 'process raft request'  (duration: 155.202067ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:59:22.278077Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-17T00:59:22.278201Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-905682","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"]}
	{"level":"warn","ts":"2024-07-17T00:59:22.27831Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.36:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:59:22.278351Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.36:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:59:22.278495Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:59:22.278562Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T00:59:22.325829Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"74e924d55c832457","current-leader-member-id":"74e924d55c832457"}
	{"level":"info","ts":"2024-07-17T00:59:22.332357Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-07-17T00:59:22.332588Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-07-17T00:59:22.33263Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-905682","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"]}
	
	
	==> kernel <==
	 01:05:04 up 11 min,  0 users,  load average: 0.20, 0.56, 0.44
	Linux multinode-905682 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b8197caff6893ae9d9b3ee7e730bbca0d64adb21569d82498706623bcc7a902d] <==
	I0717 00:58:34.888786       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.3.0/24] 
	I0717 00:58:44.880672       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 00:58:44.880746       1 main.go:303] handling current node
	I0717 00:58:44.880779       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 00:58:44.880786       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 00:58:44.881123       1 main.go:299] Handling node with IPs: map[192.168.39.142:{}]
	I0717 00:58:44.881153       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.3.0/24] 
	I0717 00:58:54.883826       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 00:58:54.883892       1 main.go:303] handling current node
	I0717 00:58:54.883956       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 00:58:54.883967       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 00:58:54.884162       1 main.go:299] Handling node with IPs: map[192.168.39.142:{}]
	I0717 00:58:54.884189       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.3.0/24] 
	I0717 00:59:04.887669       1 main.go:299] Handling node with IPs: map[192.168.39.142:{}]
	I0717 00:59:04.887773       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.3.0/24] 
	I0717 00:59:04.887989       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 00:59:04.887999       1 main.go:303] handling current node
	I0717 00:59:04.888021       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 00:59:04.888025       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 00:59:14.881011       1 main.go:299] Handling node with IPs: map[192.168.39.142:{}]
	I0717 00:59:14.881070       1 main.go:326] Node multinode-905682-m03 has CIDR [10.244.3.0/24] 
	I0717 00:59:14.881227       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 00:59:14.881252       1 main.go:303] handling current node
	I0717 00:59:14.881264       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 00:59:14.881271       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [efe0882e6b92d6937fa81f77ea5183c441439f5a5a397ef45b6e629d342dd81c] <==
	I0717 01:03:55.280750       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 01:04:05.278519       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 01:04:05.278612       1 main.go:303] handling current node
	I0717 01:04:05.278634       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 01:04:05.278640       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 01:04:15.286366       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 01:04:15.286429       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 01:04:15.286618       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 01:04:15.286653       1 main.go:303] handling current node
	I0717 01:04:25.281128       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 01:04:25.281237       1 main.go:303] handling current node
	I0717 01:04:25.281267       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 01:04:25.281285       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 01:04:35.282366       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 01:04:35.282452       1 main.go:303] handling current node
	I0717 01:04:35.282499       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 01:04:35.282504       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 01:04:45.282794       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 01:04:45.282885       1 main.go:303] handling current node
	I0717 01:04:45.282958       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 01:04:45.282965       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	I0717 01:04:55.281479       1 main.go:299] Handling node with IPs: map[192.168.39.36:{}]
	I0717 01:04:55.281618       1 main.go:303] handling current node
	I0717 01:04:55.281650       1 main.go:299] Handling node with IPs: map[192.168.39.71:{}]
	I0717 01:04:55.281668       1 main.go:326] Node multinode-905682-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [436a07b748e2dfc1ac19af9dd6966cfcf47fe716502cfdb55f2d6958cfe929b5] <==
	I0717 01:01:03.278000       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0717 01:01:03.365828       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 01:01:03.367377       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 01:01:03.374373       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 01:01:03.378306       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:01:03.384869       1 aggregator.go:165] initial CRD sync complete...
	I0717 01:01:03.384955       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 01:01:03.384984       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 01:01:03.384989       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:01:03.386409       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 01:01:03.386522       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 01:01:03.386550       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 01:01:03.386470       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 01:01:03.410468       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 01:01:03.416065       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 01:01:03.416127       1 policy_source.go:224] refreshing policies
	I0717 01:01:03.472315       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:01:04.285437       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:01:05.558739       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:01:05.713363       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:01:05.740319       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:01:05.827315       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:01:05.835411       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:01:16.220193       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 01:01:16.271377       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [d8de5d5cf3c37f46b6b44f2f75c1be7417e4d90611748e785decc9a51ac95f16] <==
	I0717 00:59:22.311889       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	W0717 00:59:22.311176       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0717 00:59:22.311194       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0717 00:59:22.311277       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	W0717 00:59:22.313601       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0717 00:59:22.311564       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0717 00:59:22.311596       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0717 00:59:22.311607       1 establishing_controller.go:87] Shutting down EstablishingController
	I0717 00:59:22.311624       1 naming_controller.go:302] Shutting down NamingConditionController
	I0717 00:59:22.311634       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0717 00:59:22.311647       1 controller.go:167] Shutting down OpenAPI controller
	I0717 00:59:22.311661       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0717 00:59:22.311671       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0717 00:59:22.311687       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0717 00:59:22.311701       1 available_controller.go:439] Shutting down AvailableConditionController
	I0717 00:59:22.311715       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0717 00:59:22.311722       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0717 00:59:22.311732       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0717 00:59:22.311740       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0717 00:59:22.311749       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0717 00:59:22.311772       1 controller.go:129] Ending legacy_token_tracking_controller
	I0717 00:59:22.314474       1 controller.go:130] Shutting down legacy_token_tracking_controller
	W0717 00:59:22.311836       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:59:22.314580       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:59:22.314654       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [6d976dd7c9b9a25f252e392f642452cbf96b23e2eea5fb7a8f48af8b3d587bbe] <==
	I0717 00:55:22.829534       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-905682-m02\" does not exist"
	I0717 00:55:22.914696       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-905682-m02" podCIDRs=["10.244.1.0/24"]
	I0717 00:55:25.152230       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-905682-m02"
	I0717 00:55:40.149735       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 00:55:42.515782       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.605076ms"
	I0717 00:55:42.549993       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.077312ms"
	I0717 00:55:42.550190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.208µs"
	I0717 00:55:42.550426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.17µs"
	I0717 00:55:44.470491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.892392ms"
	I0717 00:55:44.473036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="172.557µs"
	I0717 00:55:44.783458       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.791422ms"
	I0717 00:55:44.784313       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.428µs"
	I0717 00:56:12.375642       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-905682-m03\" does not exist"
	I0717 00:56:12.375766       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 00:56:12.431537       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-905682-m03" podCIDRs=["10.244.2.0/24"]
	I0717 00:56:15.364252       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-905682-m03"
	I0717 00:56:30.374110       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 00:56:58.526774       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 00:56:59.826093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 00:56:59.826258       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-905682-m03\" does not exist"
	I0717 00:56:59.839303       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-905682-m03" podCIDRs=["10.244.3.0/24"]
	I0717 00:57:16.938325       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 00:58:00.415333       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m03"
	I0717 00:58:00.470479       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.98783ms"
	I0717 00:58:00.470600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.689µs"
	
	
	==> kube-controller-manager [a6742398bcf0e145b5c9d5bd3ee8f9a09aab4acee70075dacb8cef41bf0b2f64] <==
	I0717 01:01:44.694319       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-905682-m02\" does not exist"
	I0717 01:01:44.709766       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-905682-m02" podCIDRs=["10.244.1.0/24"]
	I0717 01:01:45.591663       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.578µs"
	I0717 01:01:45.645429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.729µs"
	I0717 01:01:45.657042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.816µs"
	I0717 01:01:45.665023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.412µs"
	I0717 01:01:45.668753       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.97µs"
	I0717 01:01:46.356812       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.803µs"
	I0717 01:02:01.004960       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 01:02:01.023802       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.057µs"
	I0717 01:02:01.048517       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.876µs"
	I0717 01:02:02.906186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.671109ms"
	I0717 01:02:02.906471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.316µs"
	I0717 01:02:19.081127       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 01:02:20.091266       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-905682-m03\" does not exist"
	I0717 01:02:20.091374       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 01:02:20.101649       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-905682-m03" podCIDRs=["10.244.2.0/24"]
	I0717 01:02:37.425727       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 01:02:42.750875       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-905682-m02"
	I0717 01:03:16.373499       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.606568ms"
	I0717 01:03:16.373706       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.697µs"
	I0717 01:03:36.204863       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8jr6z"
	I0717 01:03:36.234972       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8jr6z"
	I0717 01:03:36.235067       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-6gwfw"
	I0717 01:03:36.259445       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-6gwfw"
	
	
	==> kube-proxy [721df31d239ea7dae3228ae02de548876aeff5be62b1ac62eb0253859fac735a] <==
	I0717 00:54:42.206025       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:54:42.219398       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.36"]
	I0717 00:54:42.265593       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:54:42.265741       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:54:42.265777       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:54:42.268470       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:54:42.268688       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:54:42.268862       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:54:42.270250       1 config.go:192] "Starting service config controller"
	I0717 00:54:42.270504       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:54:42.270635       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:54:42.270707       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:54:42.271392       1 config.go:319] "Starting node config controller"
	I0717 00:54:42.272262       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:54:42.371282       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:54:42.371282       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:54:42.372713       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b9435d9f50926b9082e0fe944b074713267772411923e8525accce36e3a19a1b] <==
	I0717 01:01:04.373362       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:01:04.408813       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.36"]
	I0717 01:01:04.471073       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:01:04.471176       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:01:04.471194       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:01:04.477678       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:01:04.478030       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:01:04.480158       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:01:04.481420       1 config.go:192] "Starting service config controller"
	I0717 01:01:04.481510       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:01:04.481610       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:01:04.481639       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:01:04.482698       1 config.go:319] "Starting node config controller"
	I0717 01:01:04.482707       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:01:04.582429       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:01:04.582554       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:01:04.583053       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1f10eb4245589e1d2fac61c3a3c30e449a89163604eb1e33361c01caefe47514] <==
	E0717 00:54:23.638659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:54:23.638696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:54:23.638723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:54:23.638794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:54:23.638822       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:54:23.641255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:23.641398       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:54:23.641260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:54:23.641534       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:54:24.629550       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 00:54:24.629601       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:54:24.664130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:54:24.664579       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:54:24.669237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:54:24.669277       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:54:24.744414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:54:24.744554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:54:24.820763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:24.821278       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:54:24.829122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:24.829189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:54:24.939417       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:54:24.939530       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 00:54:26.933405       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 00:59:22.281673       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [6eae15fbe2335cb28bf1bdfe2a4ae0fb76137c57ea797170a221bce21a335c9d] <==
	I0717 01:01:01.792156       1 serving.go:380] Generated self-signed cert in-memory
	W0717 01:01:03.320101       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:01:03.320236       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:01:03.320270       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:01:03.320356       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:01:03.384248       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 01:01:03.387395       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:01:03.389558       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:01:03.389784       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:01:03.389848       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:01:03.389957       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:01:03.489982       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.545704    3138 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/08023a17-949f-4160-b54d-9239629fc0cb-tmp\") pod \"storage-provisioner\" (UID: \"08023a17-949f-4160-b54d-9239629fc0cb\") " pod="kube-system/storage-provisioner"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.545759    3138 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/801b3f18-a89d-4cfe-ae0b-29d86546a71c-xtables-lock\") pod \"kube-proxy-ml4v5\" (UID: \"801b3f18-a89d-4cfe-ae0b-29d86546a71c\") " pod="kube-system/kube-proxy-ml4v5"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.545799    3138 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/801b3f18-a89d-4cfe-ae0b-29d86546a71c-lib-modules\") pod \"kube-proxy-ml4v5\" (UID: \"801b3f18-a89d-4cfe-ae0b-29d86546a71c\") " pod="kube-system/kube-proxy-ml4v5"
	Jul 17 01:01:03 multinode-905682 kubelet[3138]: I0717 01:01:03.545852    3138 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/93b6c6dc-424a-4d24-aabc-6cf18acf53a9-cni-cfg\") pod \"kindnet-qnxcz\" (UID: \"93b6c6dc-424a-4d24-aabc-6cf18acf53a9\") " pod="kube-system/kindnet-qnxcz"
	Jul 17 01:01:09 multinode-905682 kubelet[3138]: I0717 01:01:09.267716    3138 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 17 01:01:59 multinode-905682 kubelet[3138]: E0717 01:01:59.573501    3138 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:01:59 multinode-905682 kubelet[3138]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:01:59 multinode-905682 kubelet[3138]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:01:59 multinode-905682 kubelet[3138]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:01:59 multinode-905682 kubelet[3138]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:02:59 multinode-905682 kubelet[3138]: E0717 01:02:59.577574    3138 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:02:59 multinode-905682 kubelet[3138]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:02:59 multinode-905682 kubelet[3138]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:02:59 multinode-905682 kubelet[3138]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:02:59 multinode-905682 kubelet[3138]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:03:59 multinode-905682 kubelet[3138]: E0717 01:03:59.573122    3138 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:03:59 multinode-905682 kubelet[3138]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:03:59 multinode-905682 kubelet[3138]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:03:59 multinode-905682 kubelet[3138]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:03:59 multinode-905682 kubelet[3138]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:04:59 multinode-905682 kubelet[3138]: E0717 01:04:59.582093    3138 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:04:59 multinode-905682 kubelet[3138]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:04:59 multinode-905682 kubelet[3138]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:04:59 multinode-905682 kubelet[3138]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:04:59 multinode-905682 kubelet[3138]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:05:03.319051   51807 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19265-12897/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
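	The "bufio.Scanner: token too long" error in the stderr block above is a Go stdlib limit: bufio.Scanner refuses tokens larger than its buffer (64 KiB by default, bufio.MaxScanTokenSize), and lastStart.txt evidently contains a line longer than that, so the log-tail step gives up. A minimal sketch of reading such a file with an enlarged scanner buffer follows; this is a generic illustration, not minikube's actual logs.go code, and the file path is simply the one reported in the error above.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Path taken from the stderr message above; adjust for your environment.
		f, err := os.Open("/home/jenkins/minikube-integration/19265-12897/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the token limit above the 64 KiB default so a very long log line
		// does not abort the scan with "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 8*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}

	(bufio.Reader.ReadString('\n') is another option, since it has no fixed per-line limit.)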
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-905682 -n multinode-905682
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-905682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.30s)

                                                
                                    
x
+
TestPreload (163.11s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-625427 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0717 01:09:18.740780   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-625427 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m31.355565657s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-625427 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-625427 image pull gcr.io/k8s-minikube/busybox: (1.037625538s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-625427
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-625427: (7.286148385s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-625427 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-625427 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.605014168s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-625427 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
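	The assertion behind this failure is effectively a substring search: the image pulled before the stop/start cycle (gcr.io/k8s-minikube/busybox) must still appear in the `image list` output, and in the stdout block above it does not. The sketch below shows that kind of check with a hypothetical helper; it is an illustration, not the code in preload_test.go.

	package main

	import (
		"fmt"
		"strings"
	)

	// containsImage reports whether any line of `minikube image list` output
	// mentions the given image reference (hypothetical helper for illustration).
	func containsImage(imageList, ref string) bool {
		for _, line := range strings.Split(imageList, "\n") {
			if strings.Contains(strings.TrimSpace(line), ref) {
				return true
			}
		}
		return false
	}

	func main() {
		// A shortened stand-in for the stdout block above, which lacks the
		// busybox image and therefore fails the check.
		out := "registry.k8s.io/pause:3.7\ngcr.io/k8s-minikube/storage-provisioner:v5\n"
		fmt.Println(containsImage(out, "gcr.io/k8s-minikube/busybox")) // false
	}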
panic.go:626: *** TestPreload FAILED at 2024-07-17 01:11:36.268861505 +0000 UTC m=+4027.073007326
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-625427 -n test-preload-625427
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-625427 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-625427 logs -n 25: (1.081445416s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n multinode-905682 sudo cat                                       | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | /home/docker/cp-test_multinode-905682-m03_multinode-905682.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-905682 cp multinode-905682-m03:/home/docker/cp-test.txt                       | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m02:/home/docker/cp-test_multinode-905682-m03_multinode-905682-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n                                                                 | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | multinode-905682-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-905682 ssh -n multinode-905682-m02 sudo cat                                   | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	|         | /home/docker/cp-test_multinode-905682-m03_multinode-905682-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-905682 node stop m03                                                          | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:56 UTC |
	| node    | multinode-905682 node start                                                             | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 00:56 UTC | 17 Jul 24 00:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-905682                                                                | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 00:57 UTC |                     |
	| stop    | -p multinode-905682                                                                     | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 00:57 UTC |                     |
	| start   | -p multinode-905682                                                                     | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 00:59 UTC | 17 Jul 24 01:02 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-905682                                                                | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 01:02 UTC |                     |
	| node    | multinode-905682 node delete                                                            | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 01:02 UTC | 17 Jul 24 01:02 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-905682 stop                                                                   | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 01:02 UTC |                     |
	| start   | -p multinode-905682                                                                     | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 01:05 UTC | 17 Jul 24 01:08 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-905682                                                                | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 01:08 UTC |                     |
	| start   | -p multinode-905682-m02                                                                 | multinode-905682-m02 | jenkins | v1.33.1 | 17 Jul 24 01:08 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-905682-m03                                                                 | multinode-905682-m03 | jenkins | v1.33.1 | 17 Jul 24 01:08 UTC | 17 Jul 24 01:08 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-905682                                                                 | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 01:08 UTC |                     |
	| delete  | -p multinode-905682-m03                                                                 | multinode-905682-m03 | jenkins | v1.33.1 | 17 Jul 24 01:08 UTC | 17 Jul 24 01:08 UTC |
	| delete  | -p multinode-905682                                                                     | multinode-905682     | jenkins | v1.33.1 | 17 Jul 24 01:08 UTC | 17 Jul 24 01:08 UTC |
	| start   | -p test-preload-625427                                                                  | test-preload-625427  | jenkins | v1.33.1 | 17 Jul 24 01:08 UTC | 17 Jul 24 01:10 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-625427 image pull                                                          | test-preload-625427  | jenkins | v1.33.1 | 17 Jul 24 01:10 UTC | 17 Jul 24 01:10 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-625427                                                                  | test-preload-625427  | jenkins | v1.33.1 | 17 Jul 24 01:10 UTC | 17 Jul 24 01:10 UTC |
	| start   | -p test-preload-625427                                                                  | test-preload-625427  | jenkins | v1.33.1 | 17 Jul 24 01:10 UTC | 17 Jul 24 01:11 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-625427 image list                                                          | test-preload-625427  | jenkins | v1.33.1 | 17 Jul 24 01:11 UTC | 17 Jul 24 01:11 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:10:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:10:35.499683   54218 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:10:35.499805   54218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:10:35.499814   54218 out.go:304] Setting ErrFile to fd 2...
	I0717 01:10:35.499820   54218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:10:35.499998   54218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:10:35.500514   54218 out.go:298] Setting JSON to false
	I0717 01:10:35.501360   54218 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6784,"bootTime":1721171851,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:10:35.501410   54218 start.go:139] virtualization: kvm guest
	I0717 01:10:35.503455   54218 out.go:177] * [test-preload-625427] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:10:35.504795   54218 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:10:35.504802   54218 notify.go:220] Checking for updates...
	I0717 01:10:35.507105   54218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:10:35.508354   54218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:10:35.509487   54218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:10:35.510595   54218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:10:35.511813   54218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:10:35.513249   54218 config.go:182] Loaded profile config "test-preload-625427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0717 01:10:35.513682   54218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:10:35.513729   54218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:10:35.527966   54218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0717 01:10:35.528368   54218 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:10:35.528908   54218 main.go:141] libmachine: Using API Version  1
	I0717 01:10:35.528936   54218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:10:35.529262   54218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:10:35.529415   54218 main.go:141] libmachine: (test-preload-625427) Calling .DriverName
	I0717 01:10:35.531037   54218 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 01:10:35.532060   54218 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:10:35.532335   54218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:10:35.532366   54218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:10:35.546358   54218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41277
	I0717 01:10:35.546777   54218 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:10:35.547278   54218 main.go:141] libmachine: Using API Version  1
	I0717 01:10:35.547299   54218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:10:35.547564   54218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:10:35.547746   54218 main.go:141] libmachine: (test-preload-625427) Calling .DriverName
	I0717 01:10:35.581979   54218 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:10:35.583194   54218 start.go:297] selected driver: kvm2
	I0717 01:10:35.583214   54218 start.go:901] validating driver "kvm2" against &{Name:test-preload-625427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-625427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:10:35.583305   54218 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:10:35.584007   54218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:10:35.584071   54218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:10:35.598476   54218 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:10:35.598767   54218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:10:35.598829   54218 cni.go:84] Creating CNI manager for ""
	I0717 01:10:35.598841   54218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:10:35.598890   54218 start.go:340] cluster config:
	{Name:test-preload-625427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-625427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:10:35.598977   54218 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:10:35.601334   54218 out.go:177] * Starting "test-preload-625427" primary control-plane node in "test-preload-625427" cluster
	I0717 01:10:35.602423   54218 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0717 01:10:35.627436   54218 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0717 01:10:35.627468   54218 cache.go:56] Caching tarball of preloaded images
	I0717 01:10:35.627595   54218 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0717 01:10:35.629039   54218 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0717 01:10:35.629993   54218 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0717 01:10:35.653615   54218 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0717 01:10:39.248027   54218 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0717 01:10:39.248129   54218 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0717 01:10:40.087003   54218 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
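
The download step above fetches the preload tarball from a URL that embeds an md5 checksum parameter and verifies the saved file before using it. A minimal Go sketch of that download-and-verify pattern (standard library only; the destination path is a placeholder and this is not minikube's actual downloader):

    // Sketch: fetch a preload tarball and verify its md5 checksum.
    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func downloadWithMD5(url, dest, wantMD5 string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()

    	out, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer out.Close()

    	// Hash the stream while writing it to disk.
    	h := md5.New()
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
    	}
    	return nil
    }

    func main() {
    	// URL and checksum mirror the log above; the destination is illustrative.
    	err := downloadWithMD5(
    		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
    		"/tmp/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
    		"b2ee0ab83ed99f9e7ff71cb0cf27e8f9",
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
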
	I0717 01:10:40.087123   54218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427/config.json ...
	I0717 01:10:40.087356   54218 start.go:360] acquireMachinesLock for test-preload-625427: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:10:40.087415   54218 start.go:364] duration metric: took 37.474µs to acquireMachinesLock for "test-preload-625427"
	I0717 01:10:40.087430   54218 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:10:40.087438   54218 fix.go:54] fixHost starting: 
	I0717 01:10:40.087763   54218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:10:40.087797   54218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:10:40.102332   54218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34757
	I0717 01:10:40.102794   54218 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:10:40.103254   54218 main.go:141] libmachine: Using API Version  1
	I0717 01:10:40.103278   54218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:10:40.103582   54218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:10:40.103833   54218 main.go:141] libmachine: (test-preload-625427) Calling .DriverName
	I0717 01:10:40.104013   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetState
	I0717 01:10:40.105649   54218 fix.go:112] recreateIfNeeded on test-preload-625427: state=Stopped err=<nil>
	I0717 01:10:40.105688   54218 main.go:141] libmachine: (test-preload-625427) Calling .DriverName
	W0717 01:10:40.105839   54218 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:10:40.108026   54218 out.go:177] * Restarting existing kvm2 VM for "test-preload-625427" ...
	I0717 01:10:40.109296   54218 main.go:141] libmachine: (test-preload-625427) Calling .Start
	I0717 01:10:40.109433   54218 main.go:141] libmachine: (test-preload-625427) Ensuring networks are active...
	I0717 01:10:40.110161   54218 main.go:141] libmachine: (test-preload-625427) Ensuring network default is active
	I0717 01:10:40.110476   54218 main.go:141] libmachine: (test-preload-625427) Ensuring network mk-test-preload-625427 is active
	I0717 01:10:40.110797   54218 main.go:141] libmachine: (test-preload-625427) Getting domain xml...
	I0717 01:10:40.111545   54218 main.go:141] libmachine: (test-preload-625427) Creating domain...
	I0717 01:10:41.290988   54218 main.go:141] libmachine: (test-preload-625427) Waiting to get IP...
	I0717 01:10:41.291877   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:41.292193   54218 main.go:141] libmachine: (test-preload-625427) DBG | unable to find current IP address of domain test-preload-625427 in network mk-test-preload-625427
	I0717 01:10:41.292264   54218 main.go:141] libmachine: (test-preload-625427) DBG | I0717 01:10:41.292184   54269 retry.go:31] will retry after 240.036012ms: waiting for machine to come up
	I0717 01:10:41.533594   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:41.534001   54218 main.go:141] libmachine: (test-preload-625427) DBG | unable to find current IP address of domain test-preload-625427 in network mk-test-preload-625427
	I0717 01:10:41.534027   54218 main.go:141] libmachine: (test-preload-625427) DBG | I0717 01:10:41.533953   54269 retry.go:31] will retry after 269.908413ms: waiting for machine to come up
	I0717 01:10:41.805427   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:41.805896   54218 main.go:141] libmachine: (test-preload-625427) DBG | unable to find current IP address of domain test-preload-625427 in network mk-test-preload-625427
	I0717 01:10:41.805920   54218 main.go:141] libmachine: (test-preload-625427) DBG | I0717 01:10:41.805847   54269 retry.go:31] will retry after 320.335892ms: waiting for machine to come up
	I0717 01:10:42.127205   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:42.127649   54218 main.go:141] libmachine: (test-preload-625427) DBG | unable to find current IP address of domain test-preload-625427 in network mk-test-preload-625427
	I0717 01:10:42.127677   54218 main.go:141] libmachine: (test-preload-625427) DBG | I0717 01:10:42.127595   54269 retry.go:31] will retry after 396.080154ms: waiting for machine to come up
	I0717 01:10:42.525254   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:42.525657   54218 main.go:141] libmachine: (test-preload-625427) DBG | unable to find current IP address of domain test-preload-625427 in network mk-test-preload-625427
	I0717 01:10:42.525678   54218 main.go:141] libmachine: (test-preload-625427) DBG | I0717 01:10:42.525620   54269 retry.go:31] will retry after 696.078157ms: waiting for machine to come up
	I0717 01:10:43.223042   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:43.223501   54218 main.go:141] libmachine: (test-preload-625427) DBG | unable to find current IP address of domain test-preload-625427 in network mk-test-preload-625427
	I0717 01:10:43.223525   54218 main.go:141] libmachine: (test-preload-625427) DBG | I0717 01:10:43.223474   54269 retry.go:31] will retry after 949.100927ms: waiting for machine to come up
	I0717 01:10:44.173676   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:44.174019   54218 main.go:141] libmachine: (test-preload-625427) DBG | unable to find current IP address of domain test-preload-625427 in network mk-test-preload-625427
	I0717 01:10:44.174049   54218 main.go:141] libmachine: (test-preload-625427) DBG | I0717 01:10:44.173995   54269 retry.go:31] will retry after 987.423351ms: waiting for machine to come up
	I0717 01:10:45.163006   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:45.163328   54218 main.go:141] libmachine: (test-preload-625427) DBG | unable to find current IP address of domain test-preload-625427 in network mk-test-preload-625427
	I0717 01:10:45.163352   54218 main.go:141] libmachine: (test-preload-625427) DBG | I0717 01:10:45.163284   54269 retry.go:31] will retry after 1.012518053s: waiting for machine to come up
	I0717 01:10:46.176923   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:46.177282   54218 main.go:141] libmachine: (test-preload-625427) DBG | unable to find current IP address of domain test-preload-625427 in network mk-test-preload-625427
	I0717 01:10:46.177309   54218 main.go:141] libmachine: (test-preload-625427) DBG | I0717 01:10:46.177233   54269 retry.go:31] will retry after 1.636170837s: waiting for machine to come up
	I0717 01:10:47.815941   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:47.816338   54218 main.go:141] libmachine: (test-preload-625427) DBG | unable to find current IP address of domain test-preload-625427 in network mk-test-preload-625427
	I0717 01:10:47.816366   54218 main.go:141] libmachine: (test-preload-625427) DBG | I0717 01:10:47.816286   54269 retry.go:31] will retry after 1.50743759s: waiting for machine to come up
	I0717 01:10:49.325678   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:49.326146   54218 main.go:141] libmachine: (test-preload-625427) DBG | unable to find current IP address of domain test-preload-625427 in network mk-test-preload-625427
	I0717 01:10:49.326168   54218 main.go:141] libmachine: (test-preload-625427) DBG | I0717 01:10:49.326104   54269 retry.go:31] will retry after 2.118387826s: waiting for machine to come up
	I0717 01:10:51.445595   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:51.446024   54218 main.go:141] libmachine: (test-preload-625427) DBG | unable to find current IP address of domain test-preload-625427 in network mk-test-preload-625427
	I0717 01:10:51.446057   54218 main.go:141] libmachine: (test-preload-625427) DBG | I0717 01:10:51.445967   54269 retry.go:31] will retry after 2.297225203s: waiting for machine to come up
	I0717 01:10:53.746335   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:53.746778   54218 main.go:141] libmachine: (test-preload-625427) DBG | unable to find current IP address of domain test-preload-625427 in network mk-test-preload-625427
	I0717 01:10:53.746804   54218 main.go:141] libmachine: (test-preload-625427) DBG | I0717 01:10:53.746737   54269 retry.go:31] will retry after 4.396464573s: waiting for machine to come up
	I0717 01:10:58.146313   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.146848   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has current primary IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.146867   54218 main.go:141] libmachine: (test-preload-625427) Found IP for machine: 192.168.39.182
	I0717 01:10:58.146880   54218 main.go:141] libmachine: (test-preload-625427) Reserving static IP address...
	I0717 01:10:58.147365   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "test-preload-625427", mac: "52:54:00:80:e9:23", ip: "192.168.39.182"} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:58.147388   54218 main.go:141] libmachine: (test-preload-625427) Reserved static IP address: 192.168.39.182
	I0717 01:10:58.147402   54218 main.go:141] libmachine: (test-preload-625427) DBG | skip adding static IP to network mk-test-preload-625427 - found existing host DHCP lease matching {name: "test-preload-625427", mac: "52:54:00:80:e9:23", ip: "192.168.39.182"}
	I0717 01:10:58.147418   54218 main.go:141] libmachine: (test-preload-625427) DBG | Getting to WaitForSSH function...
	I0717 01:10:58.147444   54218 main.go:141] libmachine: (test-preload-625427) Waiting for SSH to be available...
	I0717 01:10:58.149409   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.149671   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:58.149709   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.149871   54218 main.go:141] libmachine: (test-preload-625427) DBG | Using SSH client type: external
	I0717 01:10:58.149897   54218 main.go:141] libmachine: (test-preload-625427) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/test-preload-625427/id_rsa (-rw-------)
	I0717 01:10:58.149927   54218 main.go:141] libmachine: (test-preload-625427) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/test-preload-625427/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:10:58.149942   54218 main.go:141] libmachine: (test-preload-625427) DBG | About to run SSH command:
	I0717 01:10:58.149957   54218 main.go:141] libmachine: (test-preload-625427) DBG | exit 0
	I0717 01:10:58.272327   54218 main.go:141] libmachine: (test-preload-625427) DBG | SSH cmd err, output: <nil>: 
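
The lines above show the same wait loop twice: first polling libvirt for a DHCP lease with a growing delay, then probing SSH with `exit 0` until the guest answers. A generic sketch of that retry-with-backoff pattern, shelling out to the external ssh client with options similar to the ones logged (key path, timeout and backoff values are illustrative, not minikube's real ones):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // retryUntil keeps calling probe with a growing delay until it succeeds
    // or the deadline passes.
    func retryUntil(timeout time.Duration, probe func() error) error {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for {
    		err := probe()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out; last error: %v", err)
    		}
    		time.Sleep(delay)
    		if delay < 5*time.Second {
    			delay *= 2
    		}
    	}
    }

    // sshReady probes the guest with "exit 0" over the external ssh binary.
    func sshReady(ip, keyPath string) func() error {
    	return func() error {
    		cmd := exec.Command("ssh",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "ConnectTimeout=10",
    			"-i", keyPath,
    			"docker@"+ip, "exit 0")
    		return cmd.Run()
    	}
    }

    func main() {
    	if err := retryUntil(2*time.Minute, sshReady("192.168.39.182", "/path/to/id_rsa")); err != nil {
    		fmt.Println(err)
    	}
    }
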
	I0717 01:10:58.272643   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetConfigRaw
	I0717 01:10:58.273253   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetIP
	I0717 01:10:58.275823   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.276146   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:58.276172   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.276344   54218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427/config.json ...
	I0717 01:10:58.276652   54218 machine.go:94] provisionDockerMachine start ...
	I0717 01:10:58.276676   54218 main.go:141] libmachine: (test-preload-625427) Calling .DriverName
	I0717 01:10:58.276873   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHHostname
	I0717 01:10:58.278894   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.279182   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:58.279207   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.279317   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHPort
	I0717 01:10:58.279491   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:58.279632   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:58.279747   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHUsername
	I0717 01:10:58.279933   54218 main.go:141] libmachine: Using SSH client type: native
	I0717 01:10:58.280098   54218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0717 01:10:58.280109   54218 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:10:58.380944   54218 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:10:58.380967   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetMachineName
	I0717 01:10:58.381192   54218 buildroot.go:166] provisioning hostname "test-preload-625427"
	I0717 01:10:58.381216   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetMachineName
	I0717 01:10:58.381433   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHHostname
	I0717 01:10:58.384240   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.384623   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:58.384652   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.384810   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHPort
	I0717 01:10:58.384998   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:58.385150   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:58.385302   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHUsername
	I0717 01:10:58.385449   54218 main.go:141] libmachine: Using SSH client type: native
	I0717 01:10:58.385614   54218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0717 01:10:58.385627   54218 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-625427 && echo "test-preload-625427" | sudo tee /etc/hostname
	I0717 01:10:58.498574   54218 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-625427
	
	I0717 01:10:58.498605   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHHostname
	I0717 01:10:58.501302   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.501610   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:58.501639   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.501775   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHPort
	I0717 01:10:58.501969   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:58.502120   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:58.502242   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHUsername
	I0717 01:10:58.502397   54218 main.go:141] libmachine: Using SSH client type: native
	I0717 01:10:58.502603   54218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0717 01:10:58.502630   54218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-625427' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-625427/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-625427' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:10:58.609291   54218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
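
Once SSH is reachable, the provisioning steps above (hostname, /etc/hostname, /etc/hosts) each run as a single remote command over the "native" SSH client. A self-contained sketch of running one such command with golang.org/x/crypto/ssh; treating that package as the underlying client is an assumption, and the key path is a placeholder:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runRemote opens a key-authenticated SSH session and returns the combined
    // output of one remote command.
    func runRemote(addr, user, keyPath, command string) (string, error) {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()

    	out, err := session.CombinedOutput(command)
    	return string(out), err
    }

    func main() {
    	out, err := runRemote("192.168.39.182:22", "docker", "/path/to/id_rsa",
    		`sudo hostname test-preload-625427 && echo "test-preload-625427" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }
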
	I0717 01:10:58.609325   54218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 01:10:58.609346   54218 buildroot.go:174] setting up certificates
	I0717 01:10:58.609355   54218 provision.go:84] configureAuth start
	I0717 01:10:58.609363   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetMachineName
	I0717 01:10:58.609648   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetIP
	I0717 01:10:58.612131   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.612457   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:58.612492   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.612635   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHHostname
	I0717 01:10:58.614606   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.614880   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:58.614904   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.615005   54218 provision.go:143] copyHostCerts
	I0717 01:10:58.615067   54218 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 01:10:58.615088   54218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 01:10:58.615160   54218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 01:10:58.615282   54218 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 01:10:58.615293   54218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 01:10:58.615333   54218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 01:10:58.615409   54218 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 01:10:58.615421   54218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 01:10:58.615454   54218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 01:10:58.615523   54218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.test-preload-625427 san=[127.0.0.1 192.168.39.182 localhost minikube test-preload-625427]
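
The server certificate above is generated with a SAN list covering 127.0.0.1, the VM IP, localhost, minikube and the profile name. A standard-library sketch of issuing a certificate with an equivalent SAN list (self-signed for brevity, whereas the log's cert is signed by the minikube CA; names and expiry mirror the log but are illustrative):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SANs taken from the san=[...] list in the log above.
    	dnsNames := []string{"localhost", "minikube", "test-preload-625427"}
    	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.182")}

    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-625427"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     dnsNames,
    		IPAddresses:  ips,
    	}
    	// Self-signed here; a CA-signed cert would pass the CA cert and key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
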
	I0717 01:10:58.854952   54218 provision.go:177] copyRemoteCerts
	I0717 01:10:58.855010   54218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:10:58.855035   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHHostname
	I0717 01:10:58.857536   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.857812   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:58.857849   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:58.858056   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHPort
	I0717 01:10:58.858257   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:58.858409   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHUsername
	I0717 01:10:58.858549   54218 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/test-preload-625427/id_rsa Username:docker}
	I0717 01:10:58.938956   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:10:58.963582   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 01:10:58.987563   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 01:10:59.010854   54218 provision.go:87] duration metric: took 401.488518ms to configureAuth
	I0717 01:10:59.010888   54218 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:10:59.011079   54218 config.go:182] Loaded profile config "test-preload-625427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0717 01:10:59.011172   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHHostname
	I0717 01:10:59.013784   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:59.014115   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:59.014159   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:59.014277   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHPort
	I0717 01:10:59.014469   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:59.014721   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:59.014892   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHUsername
	I0717 01:10:59.015055   54218 main.go:141] libmachine: Using SSH client type: native
	I0717 01:10:59.015253   54218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0717 01:10:59.015268   54218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:10:59.267570   54218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:10:59.267596   54218 machine.go:97] duration metric: took 990.928033ms to provisionDockerMachine
	I0717 01:10:59.267607   54218 start.go:293] postStartSetup for "test-preload-625427" (driver="kvm2")
	I0717 01:10:59.267616   54218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:10:59.267630   54218 main.go:141] libmachine: (test-preload-625427) Calling .DriverName
	I0717 01:10:59.267938   54218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:10:59.267978   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHHostname
	I0717 01:10:59.270500   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:59.270810   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:59.270855   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:59.271015   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHPort
	I0717 01:10:59.271193   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:59.271331   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHUsername
	I0717 01:10:59.271478   54218 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/test-preload-625427/id_rsa Username:docker}
	I0717 01:10:59.350894   54218 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:10:59.355242   54218 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:10:59.355265   54218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:10:59.355345   54218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:10:59.355425   54218 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:10:59.355625   54218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:10:59.364959   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:10:59.388412   54218 start.go:296] duration metric: took 120.794028ms for postStartSetup
	I0717 01:10:59.388451   54218 fix.go:56] duration metric: took 19.301012142s for fixHost
	I0717 01:10:59.388477   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHHostname
	I0717 01:10:59.390755   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:59.391017   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:59.391046   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:59.391185   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHPort
	I0717 01:10:59.391371   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:59.391523   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:59.391623   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHUsername
	I0717 01:10:59.391753   54218 main.go:141] libmachine: Using SSH client type: native
	I0717 01:10:59.391907   54218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0717 01:10:59.391917   54218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:10:59.493491   54218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721178659.469135552
	
	I0717 01:10:59.493528   54218 fix.go:216] guest clock: 1721178659.469135552
	I0717 01:10:59.493537   54218 fix.go:229] Guest: 2024-07-17 01:10:59.469135552 +0000 UTC Remote: 2024-07-17 01:10:59.38845425 +0000 UTC m=+23.922144512 (delta=80.681302ms)
	I0717 01:10:59.493572   54218 fix.go:200] guest clock delta is within tolerance: 80.681302ms
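
The clock check above compares the guest's `date +%s.%N` output against the host clock and accepts the drift if it stays within a tolerance. A small sketch of that comparison (the tolerance value is an assumption):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    // withinTolerance parses the guest's `date +%s.%N` output and checks that
    // the guest clock is within tol of the local (host) clock.
    func withinTolerance(guestOutput string, tol time.Duration) (time.Duration, bool, error) {
    	secs, err := strconv.ParseFloat(guestOutput, 64)
    	if err != nil {
    		return 0, false, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	return delta, math.Abs(float64(delta)) <= float64(tol), nil
    }

    func main() {
    	delta, ok, err := withinTolerance("1721178659.469135552", 2*time.Second)
    	fmt.Println(delta, ok, err)
    }
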
	I0717 01:10:59.493578   54218 start.go:83] releasing machines lock for "test-preload-625427", held for 19.406153209s
	I0717 01:10:59.493597   54218 main.go:141] libmachine: (test-preload-625427) Calling .DriverName
	I0717 01:10:59.493845   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetIP
	I0717 01:10:59.496241   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:59.496543   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:59.496610   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:59.496696   54218 main.go:141] libmachine: (test-preload-625427) Calling .DriverName
	I0717 01:10:59.497133   54218 main.go:141] libmachine: (test-preload-625427) Calling .DriverName
	I0717 01:10:59.497313   54218 main.go:141] libmachine: (test-preload-625427) Calling .DriverName
	I0717 01:10:59.497409   54218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:10:59.497454   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHHostname
	I0717 01:10:59.497515   54218 ssh_runner.go:195] Run: cat /version.json
	I0717 01:10:59.497538   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHHostname
	I0717 01:10:59.499785   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:59.500059   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:59.500084   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:59.500127   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:59.500241   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHPort
	I0717 01:10:59.500400   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:59.500454   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:10:59.500478   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:10:59.500543   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHUsername
	I0717 01:10:59.500679   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHPort
	I0717 01:10:59.500764   54218 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/test-preload-625427/id_rsa Username:docker}
	I0717 01:10:59.500841   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:10:59.500992   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHUsername
	I0717 01:10:59.501138   54218 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/test-preload-625427/id_rsa Username:docker}
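
The registry reachability probe (`curl -sS -m 2 https://registry.k8s.io/`) and the `cat /version.json` read above are issued back-to-back over two separate SSH sessions rather than one after the other. A sketch of running two such checks concurrently (plain `ssh` via os/exec; the target is a placeholder and the commands are copied from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"sync"
    )

    func main() {
    	remote := "docker@192.168.39.182" // placeholder target
    	cmds := []string{
    		"curl -sS -m 2 https://registry.k8s.io/",
    		"cat /version.json",
    	}

    	var wg sync.WaitGroup
    	for _, c := range cmds {
    		wg.Add(1)
    		go func(c string) {
    			defer wg.Done()
    			out, err := exec.Command("ssh", remote, c).CombinedOutput()
    			fmt.Printf("%q -> err=%v, %d bytes\n", c, err, len(out))
    		}(c)
    	}
    	wg.Wait()
    }
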
	I0717 01:10:59.601391   54218 ssh_runner.go:195] Run: systemctl --version
	I0717 01:10:59.607974   54218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:10:59.752478   54218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:10:59.759340   54218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:10:59.759420   54218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:10:59.776225   54218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:10:59.776244   54218 start.go:495] detecting cgroup driver to use...
	I0717 01:10:59.776298   54218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:10:59.793051   54218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:10:59.806285   54218 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:10:59.806440   54218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:10:59.820208   54218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:10:59.832946   54218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:10:59.945240   54218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:11:00.085226   54218 docker.go:233] disabling docker service ...
	I0717 01:11:00.085294   54218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:11:00.100304   54218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:11:00.114059   54218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:11:00.246788   54218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:11:00.355547   54218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:11:00.369155   54218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:11:00.387407   54218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0717 01:11:00.387465   54218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:11:00.398673   54218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:11:00.398743   54218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:11:00.408928   54218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:11:00.418723   54218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:11:00.428644   54218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:11:00.439262   54218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:11:00.449226   54218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:11:00.465975   54218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
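For reference, a quick way to confirm what the sed edits above leave in the cri-o drop-in; the values in the comments are assumptions based on the commands shown, not captured from this run:

    # inspect the edited cri-o drop-in (expected values assumed, not from this run's output)
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.7"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]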
	I0717 01:11:00.475579   54218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:11:00.484231   54218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:11:00.484286   54218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:11:00.497227   54218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
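The sysctl failure above is expected before br_netfilter is loaded; a sketch of checking the prerequisites by hand after the modprobe and echo above (assumed manual checks, not from this run):

    lsmod | grep br_netfilter                     # module loaded by the modprobe step above
    sysctl net.bridge.bridge-nf-call-iptables     # resolvable once br_netfilter is loaded
    cat /proc/sys/net/ipv4/ip_forward             # set to 1 by the echo above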
	I0717 01:11:00.506747   54218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:11:00.630751   54218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:11:00.763315   54218 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:11:00.763393   54218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:11:00.768405   54218 start.go:563] Will wait 60s for crictl version
	I0717 01:11:00.768459   54218 ssh_runner.go:195] Run: which crictl
	I0717 01:11:00.772114   54218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:11:00.811894   54218 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:11:00.811962   54218 ssh_runner.go:195] Run: crio --version
	I0717 01:11:00.839706   54218 ssh_runner.go:195] Run: crio --version
	I0717 01:11:00.868735   54218 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0717 01:11:00.870287   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetIP
	I0717 01:11:00.872870   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:11:00.873234   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:11:00.873257   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:11:00.873457   54218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:11:00.877442   54218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
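The bash one-liner above rewrites /etc/hosts in place; a rough manual equivalent (gateway IP taken from this run, command assumed):

    # append the alias only if it is missing
    grep -q 'host.minikube.internal' /etc/hosts || \
      echo '192.168.39.1 host.minikube.internal' | sudo tee -a /etc/hosts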
	I0717 01:11:00.889716   54218 kubeadm.go:883] updating cluster {Name:test-preload-625427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-625427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:11:00.889820   54218 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0717 01:11:00.889856   54218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:11:00.926499   54218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0717 01:11:00.926556   54218 ssh_runner.go:195] Run: which lz4
	I0717 01:11:00.930505   54218 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:11:00.934577   54218 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:11:00.934602   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0717 01:11:02.503233   54218 crio.go:462] duration metric: took 1.572753564s to copy over tarball
	I0717 01:11:02.503293   54218 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:11:04.871943   54218 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.36861963s)
	I0717 01:11:04.871981   54218 crio.go:469] duration metric: took 2.368718828s to extract the tarball
	I0717 01:11:04.871990   54218 ssh_runner.go:146] rm: /preloaded.tar.lz4
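A sketch of the preload step performed above, run on the guest: the lz4 tarball is unpacked into /var so CRI-O's image store is pre-populated, then the tarball is removed (paths taken from this run, assumed manual equivalent):

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json | head      # should now list the preloaded images
    sudo rm -f /preloaded.tar.lz4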
	I0717 01:11:04.912992   54218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:11:04.955451   54218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0717 01:11:04.955478   54218 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:11:04.955538   54218 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:11:04.955575   54218 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0717 01:11:04.955588   54218 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 01:11:04.955595   54218 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0717 01:11:04.955555   54218 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0717 01:11:04.955662   54218 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0717 01:11:04.955652   54218 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0717 01:11:04.955646   54218 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 01:11:04.957178   54218 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0717 01:11:04.957191   54218 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0717 01:11:04.957197   54218 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 01:11:04.957180   54218 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 01:11:04.957184   54218 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0717 01:11:04.957257   54218 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:11:04.957226   54218 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0717 01:11:04.957510   54218 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0717 01:11:05.103653   54218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0717 01:11:05.107330   54218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0717 01:11:05.108387   54218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0717 01:11:05.112790   54218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0717 01:11:05.113484   54218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0717 01:11:05.134223   54218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 01:11:05.152592   54218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0717 01:11:05.220792   54218 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0717 01:11:05.220835   54218 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0717 01:11:05.220886   54218 ssh_runner.go:195] Run: which crictl
	I0717 01:11:05.236399   54218 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0717 01:11:05.236431   54218 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0717 01:11:05.236477   54218 ssh_runner.go:195] Run: which crictl
	I0717 01:11:05.249093   54218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:11:05.249484   54218 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0717 01:11:05.249524   54218 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0717 01:11:05.249566   54218 ssh_runner.go:195] Run: which crictl
	I0717 01:11:05.263924   54218 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0717 01:11:05.263967   54218 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0717 01:11:05.264012   54218 ssh_runner.go:195] Run: which crictl
	I0717 01:11:05.285285   54218 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0717 01:11:05.285337   54218 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 01:11:05.285406   54218 ssh_runner.go:195] Run: which crictl
	I0717 01:11:05.302608   54218 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0717 01:11:05.302648   54218 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 01:11:05.302693   54218 ssh_runner.go:195] Run: which crictl
	I0717 01:11:05.324759   54218 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0717 01:11:05.324804   54218 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0717 01:11:05.324809   54218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0717 01:11:05.324842   54218 ssh_runner.go:195] Run: which crictl
	I0717 01:11:05.324893   54218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0717 01:11:05.426317   54218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0717 01:11:05.426404   54218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0717 01:11:05.426499   54218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0717 01:11:05.426504   54218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 01:11:05.426575   54218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0717 01:11:05.426628   54218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0717 01:11:05.426652   54218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0717 01:11:05.426676   54218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0717 01:11:05.426763   54218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0717 01:11:05.523842   54218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0717 01:11:05.523973   54218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0717 01:11:05.546899   54218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0717 01:11:05.546914   54218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0717 01:11:05.546931   54218 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0717 01:11:05.546979   54218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0717 01:11:05.546992   54218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0717 01:11:05.546997   54218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0717 01:11:05.547073   54218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0717 01:11:05.547119   54218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0717 01:11:05.547154   54218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0717 01:11:05.547179   54218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0717 01:11:05.547242   54218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0717 01:11:05.547319   54218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0717 01:11:05.549150   54218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0717 01:11:09.927899   54218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (4.380690605s)
	I0717 01:11:09.927945   54218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0717 01:11:09.927960   54218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (4.380937468s)
	I0717 01:11:09.927980   54218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (4.380984502s)
	I0717 01:11:09.927987   54218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0717 01:11:09.927989   54218 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0717 01:11:09.928007   54218 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0717 01:11:09.928028   54218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (4.380854154s)
	I0717 01:11:09.928048   54218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0717 01:11:09.928072   54218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0717 01:11:09.928082   54218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (4.380750079s)
	I0717 01:11:09.928091   54218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0717 01:11:10.076546   54218 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0717 01:11:10.076614   54218 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0717 01:11:10.076661   54218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0717 01:11:10.521057   54218 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0717 01:11:10.521103   54218 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0717 01:11:10.521157   54218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0717 01:11:10.861382   54218 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0717 01:11:10.861428   54218 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0717 01:11:10.861488   54218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0717 01:11:13.010665   54218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.149148251s)
	I0717 01:11:13.010694   54218 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0717 01:11:13.010725   54218 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0717 01:11:13.010772   54218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0717 01:11:13.860615   54218 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0717 01:11:13.860661   54218 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0717 01:11:13.860701   54218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0717 01:11:14.601129   54218 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0717 01:11:14.601183   54218 cache_images.go:123] Successfully loaded all cached images
	I0717 01:11:14.601191   54218 cache_images.go:92] duration metric: took 9.645698782s to LoadCachedImages
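Because the extracted preload did not contain the expected v1.24.4 images, each one is transferred from the host cache and loaded individually. A sketch of one round of that loop, using the pause image from this run (assumed manual equivalent of the inspect / rmi / load sequence above):

    # 1) does the runtime already have it? (fails here, so the image must be loaded)
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7 || true
    # 2) drop any stale reference, ignoring "not found"
    sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7 || true
    # 3) load the archive copied over from the host-side cache
    sudo podman load -i /var/lib/minikube/images/pause_3.7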
	I0717 01:11:14.601205   54218 kubeadm.go:934] updating node { 192.168.39.182 8443 v1.24.4 crio true true} ...
	I0717 01:11:14.601343   54218 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-625427 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-625427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:11:14.601438   54218 ssh_runner.go:195] Run: crio config
	I0717 01:11:14.647409   54218 cni.go:84] Creating CNI manager for ""
	I0717 01:11:14.647430   54218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:11:14.647442   54218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:11:14.647458   54218 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.182 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-625427 NodeName:test-preload-625427 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:11:14.647588   54218 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-625427"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:11:14.647650   54218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0717 01:11:14.657295   54218 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:11:14.657350   54218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:11:14.666370   54218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0717 01:11:14.684321   54218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:11:14.702383   54218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0717 01:11:14.721170   54218 ssh_runner.go:195] Run: grep 192.168.39.182	control-plane.minikube.internal$ /etc/hosts
	I0717 01:11:14.725155   54218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:11:14.737378   54218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:11:14.865491   54218 ssh_runner.go:195] Run: sudo systemctl start kubelet
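A sketch of inspecting the kubelet unit and drop-in written above (file paths taken from this run, checks assumed):

    systemctl cat kubelet                                            # base unit plus the 10-kubeadm.conf drop-in
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl status kubelet --no-pager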
	I0717 01:11:14.882348   54218 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427 for IP: 192.168.39.182
	I0717 01:11:14.882370   54218 certs.go:194] generating shared ca certs ...
	I0717 01:11:14.882389   54218 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:11:14.882571   54218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:11:14.882629   54218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:11:14.882644   54218 certs.go:256] generating profile certs ...
	I0717 01:11:14.882742   54218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427/client.key
	I0717 01:11:14.882812   54218 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427/apiserver.key.d0283e9c
	I0717 01:11:14.882869   54218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427/proxy-client.key
	I0717 01:11:14.883010   54218 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:11:14.883049   54218 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:11:14.883067   54218 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:11:14.883091   54218 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:11:14.883123   54218 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:11:14.883143   54218 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:11:14.883198   54218 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:11:14.884048   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:11:14.923662   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:11:14.960814   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:11:15.001459   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:11:15.042997   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 01:11:15.084245   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:11:15.107608   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:11:15.130423   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:11:15.152583   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:11:15.175103   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:11:15.198206   54218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:11:15.220508   54218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:11:15.236775   54218 ssh_runner.go:195] Run: openssl version
	I0717 01:11:15.242490   54218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:11:15.253326   54218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:11:15.257871   54218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:11:15.257927   54218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:11:15.263617   54218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:11:15.274493   54218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:11:15.285498   54218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:11:15.289925   54218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:11:15.289980   54218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:11:15.295403   54218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:11:15.305924   54218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:11:15.316526   54218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:11:15.320909   54218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:11:15.320953   54218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:11:15.326478   54218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:11:15.337459   54218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:11:15.341881   54218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:11:15.347450   54218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:11:15.353281   54218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:11:15.359028   54218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:11:15.364571   54218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:11:15.369989   54218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
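The hash-named links above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention so the system trust store can find each PEM, and -checkend 86400 fails if a certificate expires within the next 24 hours. A sketch of the same two checks done by hand (paths from this run, commands assumed):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"    # the hash resolves to b5213941 in this run
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for >24h"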
	I0717 01:11:15.375468   54218 kubeadm.go:392] StartCluster: {Name:test-preload-625427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-625427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:11:15.375546   54218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:11:15.375615   54218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:11:15.417113   54218 cri.go:89] found id: ""
	I0717 01:11:15.417191   54218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:11:15.427516   54218 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:11:15.427535   54218 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:11:15.427585   54218 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:11:15.438816   54218 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:11:15.439229   54218 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-625427" does not appear in /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:11:15.439343   54218 kubeconfig.go:62] /home/jenkins/minikube-integration/19265-12897/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-625427" cluster setting kubeconfig missing "test-preload-625427" context setting]
	I0717 01:11:15.439612   54218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:11:15.440186   54218 kapi.go:59] client config for test-preload-625427: &rest.Config{Host:"https://192.168.39.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427/client.crt", KeyFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427/client.key", CAFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 01:11:15.440799   54218 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:11:15.451540   54218 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.182
	I0717 01:11:15.451569   54218 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:11:15.451579   54218 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:11:15.451628   54218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:11:15.492326   54218 cri.go:89] found id: ""
	I0717 01:11:15.492393   54218 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:11:15.510910   54218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:11:15.521169   54218 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:11:15.521189   54218 kubeadm.go:157] found existing configuration files:
	
	I0717 01:11:15.521235   54218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:11:15.530471   54218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:11:15.530537   54218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:11:15.540420   54218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:11:15.549896   54218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:11:15.549950   54218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:11:15.559494   54218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:11:15.568328   54218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:11:15.568374   54218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:11:15.577868   54218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:11:15.586716   54218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:11:15.586770   54218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:11:15.595807   54218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:11:15.605273   54218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:11:15.705228   54218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:11:16.527637   54218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:11:16.796370   54218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:11:16.887044   54218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:11:16.974657   54218 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:11:16.974737   54218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:11:17.475427   54218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:11:17.975715   54218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:11:18.009585   54218 api_server.go:72] duration metric: took 1.03492701s to wait for apiserver process to appear ...
	I0717 01:11:18.009611   54218 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:11:18.009630   54218 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0717 01:11:18.010054   54218 api_server.go:269] stopped: https://192.168.39.182:8443/healthz: Get "https://192.168.39.182:8443/healthz": dial tcp 192.168.39.182:8443: connect: connection refused
	I0717 01:11:18.510693   54218 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0717 01:11:21.778892   54218 api_server.go:279] https://192.168.39.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:11:21.778924   54218 api_server.go:103] status: https://192.168.39.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:11:21.778937   54218 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0717 01:11:21.792161   54218 api_server.go:279] https://192.168.39.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:11:21.792184   54218 api_server.go:103] status: https://192.168.39.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:11:22.010515   54218 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0717 01:11:22.018172   54218 api_server.go:279] https://192.168.39.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:11:22.018212   54218 api_server.go:103] status: https://192.168.39.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:11:22.509711   54218 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0717 01:11:22.515059   54218 api_server.go:279] https://192.168.39.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:11:22.515095   54218 api_server.go:103] status: https://192.168.39.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:11:23.010712   54218 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0717 01:11:23.016521   54218 api_server.go:279] https://192.168.39.182:8443/healthz returned 200:
	ok
	I0717 01:11:23.024662   54218 api_server.go:141] control plane version: v1.24.4
	I0717 01:11:23.024689   54218 api_server.go:131] duration metric: took 5.015070622s to wait for apiserver health ...
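The 403 and 500 responses above are the usual startup progression: anonymous access to /healthz is denied until the RBAC bootstrap roles exist, then individual post-start hooks report failed until they complete. A sketch of probing the endpoint by hand (address from this run, curl invocation assumed):

    curl -k https://192.168.39.182:8443/healthz               # -k: the apiserver cert is not in the local trust store
    curl -k 'https://192.168.39.182:8443/healthz?verbose'     # per-check [+]/[-] listing like the output above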
	I0717 01:11:23.024700   54218 cni.go:84] Creating CNI manager for ""
	I0717 01:11:23.024708   54218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:11:23.026334   54218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:11:23.027635   54218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:11:23.038710   54218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:11:23.071262   54218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:11:23.080985   54218 system_pods.go:59] 8 kube-system pods found
	I0717 01:11:23.081010   54218 system_pods.go:61] "coredns-6d4b75cb6d-2l6th" [869a3cae-cbf9-47a1-842f-b6e2f3656185] Running
	I0717 01:11:23.081015   54218 system_pods.go:61] "coredns-6d4b75cb6d-mcfz8" [9a204f23-4455-47b4-9909-6ae9298090c1] Running
	I0717 01:11:23.081018   54218 system_pods.go:61] "etcd-test-preload-625427" [b2392c76-9a78-4e41-8589-00270b564125] Running
	I0717 01:11:23.081022   54218 system_pods.go:61] "kube-apiserver-test-preload-625427" [f60e237e-d6fb-4f42-b812-4852dc7d287d] Running
	I0717 01:11:23.081025   54218 system_pods.go:61] "kube-controller-manager-test-preload-625427" [7746406d-990f-4cc2-ac33-00da97ee125b] Running
	I0717 01:11:23.081032   54218 system_pods.go:61] "kube-proxy-knx4n" [ad9e512e-e38d-4bac-bafc-650f721699fe] Running
	I0717 01:11:23.081038   54218 system_pods.go:61] "kube-scheduler-test-preload-625427" [4904fd20-2b45-4893-aca1-c3e088bc1f91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:11:23.081043   54218 system_pods.go:61] "storage-provisioner" [edbedd44-0a2a-4bad-a907-410adb53e0f5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 01:11:23.081050   54218 system_pods.go:74] duration metric: took 9.770691ms to wait for pod list to return data ...
	I0717 01:11:23.081057   54218 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:11:23.084318   54218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:11:23.084342   54218 node_conditions.go:123] node cpu capacity is 2
	I0717 01:11:23.084366   54218 node_conditions.go:105] duration metric: took 3.294878ms to run NodePressure ...
	I0717 01:11:23.084385   54218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:11:23.353986   54218 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:11:23.359993   54218 kubeadm.go:739] kubelet initialised
	I0717 01:11:23.360013   54218 kubeadm.go:740] duration metric: took 5.998048ms waiting for restarted kubelet to initialise ...
	I0717 01:11:23.360019   54218 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:11:23.367354   54218 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-2l6th" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:23.375749   54218 pod_ready.go:97] node "test-preload-625427" hosting pod "coredns-6d4b75cb6d-2l6th" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:23.375767   54218 pod_ready.go:81] duration metric: took 8.392903ms for pod "coredns-6d4b75cb6d-2l6th" in "kube-system" namespace to be "Ready" ...
	E0717 01:11:23.375776   54218 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-625427" hosting pod "coredns-6d4b75cb6d-2l6th" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:23.375782   54218 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-mcfz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:23.383551   54218 pod_ready.go:97] node "test-preload-625427" hosting pod "coredns-6d4b75cb6d-mcfz8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:23.383578   54218 pod_ready.go:81] duration metric: took 7.786469ms for pod "coredns-6d4b75cb6d-mcfz8" in "kube-system" namespace to be "Ready" ...
	E0717 01:11:23.383590   54218 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-625427" hosting pod "coredns-6d4b75cb6d-mcfz8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:23.383597   54218 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:23.388347   54218 pod_ready.go:97] node "test-preload-625427" hosting pod "etcd-test-preload-625427" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:23.388371   54218 pod_ready.go:81] duration metric: took 4.759816ms for pod "etcd-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	E0717 01:11:23.388382   54218 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-625427" hosting pod "etcd-test-preload-625427" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:23.388388   54218 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:23.476143   54218 pod_ready.go:97] node "test-preload-625427" hosting pod "kube-apiserver-test-preload-625427" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:23.476177   54218 pod_ready.go:81] duration metric: took 87.77814ms for pod "kube-apiserver-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	E0717 01:11:23.476190   54218 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-625427" hosting pod "kube-apiserver-test-preload-625427" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:23.476199   54218 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:23.875636   54218 pod_ready.go:97] node "test-preload-625427" hosting pod "kube-controller-manager-test-preload-625427" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:23.875660   54218 pod_ready.go:81] duration metric: took 399.450608ms for pod "kube-controller-manager-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	E0717 01:11:23.875669   54218 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-625427" hosting pod "kube-controller-manager-test-preload-625427" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:23.875675   54218 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-knx4n" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:24.275109   54218 pod_ready.go:97] node "test-preload-625427" hosting pod "kube-proxy-knx4n" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:24.275148   54218 pod_ready.go:81] duration metric: took 399.464688ms for pod "kube-proxy-knx4n" in "kube-system" namespace to be "Ready" ...
	E0717 01:11:24.275162   54218 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-625427" hosting pod "kube-proxy-knx4n" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:24.275168   54218 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:24.674936   54218 pod_ready.go:97] node "test-preload-625427" hosting pod "kube-scheduler-test-preload-625427" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:24.674965   54218 pod_ready.go:81] duration metric: took 399.791987ms for pod "kube-scheduler-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	E0717 01:11:24.674974   54218 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-625427" hosting pod "kube-scheduler-test-preload-625427" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:24.674981   54218 pod_ready.go:38] duration metric: took 1.31495465s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:11:24.674997   54218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:11:24.686596   54218 ops.go:34] apiserver oom_adj: -16
	I0717 01:11:24.686614   54218 kubeadm.go:597] duration metric: took 9.259072959s to restartPrimaryControlPlane
	I0717 01:11:24.686621   54218 kubeadm.go:394] duration metric: took 9.311158608s to StartCluster
	I0717 01:11:24.686636   54218 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:11:24.686696   54218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:11:24.687262   54218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:11:24.687459   54218 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:11:24.687564   54218 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:11:24.687627   54218 config.go:182] Loaded profile config "test-preload-625427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0717 01:11:24.687651   54218 addons.go:69] Setting storage-provisioner=true in profile "test-preload-625427"
	I0717 01:11:24.687671   54218 addons.go:69] Setting default-storageclass=true in profile "test-preload-625427"
	I0717 01:11:24.687690   54218 addons.go:234] Setting addon storage-provisioner=true in "test-preload-625427"
	I0717 01:11:24.687696   54218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-625427"
	W0717 01:11:24.687703   54218 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:11:24.687761   54218 host.go:66] Checking if "test-preload-625427" exists ...
	I0717 01:11:24.688054   54218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:11:24.688117   54218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:11:24.688144   54218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:11:24.688182   54218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:11:24.689165   54218 out.go:177] * Verifying Kubernetes components...
	I0717 01:11:24.690515   54218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:11:24.703349   54218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46035
	I0717 01:11:24.703372   54218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43115
	I0717 01:11:24.703809   54218 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:11:24.703811   54218 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:11:24.704258   54218 main.go:141] libmachine: Using API Version  1
	I0717 01:11:24.704274   54218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:11:24.704362   54218 main.go:141] libmachine: Using API Version  1
	I0717 01:11:24.704383   54218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:11:24.704587   54218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:11:24.704697   54218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:11:24.704727   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetState
	I0717 01:11:24.705195   54218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:11:24.705232   54218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:11:24.707248   54218 kapi.go:59] client config for test-preload-625427: &rest.Config{Host:"https://192.168.39.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427/client.crt", KeyFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/test-preload-625427/client.key", CAFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 01:11:24.707564   54218 addons.go:234] Setting addon default-storageclass=true in "test-preload-625427"
	W0717 01:11:24.707583   54218 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:11:24.707613   54218 host.go:66] Checking if "test-preload-625427" exists ...
	I0717 01:11:24.707957   54218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:11:24.707993   54218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:11:24.720077   54218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37935
	I0717 01:11:24.720538   54218 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:11:24.721056   54218 main.go:141] libmachine: Using API Version  1
	I0717 01:11:24.721078   54218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:11:24.721450   54218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:11:24.721644   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetState
	I0717 01:11:24.722133   54218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I0717 01:11:24.722550   54218 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:11:24.723016   54218 main.go:141] libmachine: Using API Version  1
	I0717 01:11:24.723038   54218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:11:24.723331   54218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:11:24.723399   54218 main.go:141] libmachine: (test-preload-625427) Calling .DriverName
	I0717 01:11:24.723772   54218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:11:24.723804   54218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:11:24.725090   54218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:11:24.726320   54218 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:11:24.726340   54218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:11:24.726358   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHHostname
	I0717 01:11:24.729266   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:11:24.729763   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:11:24.729791   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:11:24.729941   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHPort
	I0717 01:11:24.730131   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:11:24.730285   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHUsername
	I0717 01:11:24.730409   54218 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/test-preload-625427/id_rsa Username:docker}
	I0717 01:11:24.737630   54218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0717 01:11:24.737935   54218 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:11:24.738345   54218 main.go:141] libmachine: Using API Version  1
	I0717 01:11:24.738362   54218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:11:24.738627   54218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:11:24.738799   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetState
	I0717 01:11:24.740049   54218 main.go:141] libmachine: (test-preload-625427) Calling .DriverName
	I0717 01:11:24.740242   54218 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:11:24.740255   54218 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:11:24.740268   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHHostname
	I0717 01:11:24.742754   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:11:24.743197   54218 main.go:141] libmachine: (test-preload-625427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e9:23", ip: ""} in network mk-test-preload-625427: {Iface:virbr1 ExpiryTime:2024-07-17 02:10:50 +0000 UTC Type:0 Mac:52:54:00:80:e9:23 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:test-preload-625427 Clientid:01:52:54:00:80:e9:23}
	I0717 01:11:24.743224   54218 main.go:141] libmachine: (test-preload-625427) DBG | domain test-preload-625427 has defined IP address 192.168.39.182 and MAC address 52:54:00:80:e9:23 in network mk-test-preload-625427
	I0717 01:11:24.743367   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHPort
	I0717 01:11:24.743545   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHKeyPath
	I0717 01:11:24.743690   54218 main.go:141] libmachine: (test-preload-625427) Calling .GetSSHUsername
	I0717 01:11:24.743814   54218 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/test-preload-625427/id_rsa Username:docker}
	I0717 01:11:24.865698   54218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:11:24.885270   54218 node_ready.go:35] waiting up to 6m0s for node "test-preload-625427" to be "Ready" ...
	I0717 01:11:24.939556   54218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:11:25.040808   54218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:11:25.900605   54218 main.go:141] libmachine: Making call to close driver server
	I0717 01:11:25.900628   54218 main.go:141] libmachine: (test-preload-625427) Calling .Close
	I0717 01:11:25.900944   54218 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:11:25.900969   54218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:11:25.900978   54218 main.go:141] libmachine: Making call to close driver server
	I0717 01:11:25.900987   54218 main.go:141] libmachine: (test-preload-625427) Calling .Close
	I0717 01:11:25.901195   54218 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:11:25.901223   54218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:11:25.901230   54218 main.go:141] libmachine: (test-preload-625427) DBG | Closing plugin on server side
	I0717 01:11:25.911834   54218 main.go:141] libmachine: Making call to close driver server
	I0717 01:11:25.911854   54218 main.go:141] libmachine: (test-preload-625427) Calling .Close
	I0717 01:11:25.912092   54218 main.go:141] libmachine: (test-preload-625427) DBG | Closing plugin on server side
	I0717 01:11:25.912137   54218 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:11:25.912163   54218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:11:25.914371   54218 main.go:141] libmachine: Making call to close driver server
	I0717 01:11:25.914388   54218 main.go:141] libmachine: (test-preload-625427) Calling .Close
	I0717 01:11:25.914575   54218 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:11:25.914596   54218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:11:25.914605   54218 main.go:141] libmachine: Making call to close driver server
	I0717 01:11:25.914612   54218 main.go:141] libmachine: (test-preload-625427) Calling .Close
	I0717 01:11:25.914613   54218 main.go:141] libmachine: (test-preload-625427) DBG | Closing plugin on server side
	I0717 01:11:25.914810   54218 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:11:25.914849   54218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:11:25.914860   54218 main.go:141] libmachine: (test-preload-625427) DBG | Closing plugin on server side
	I0717 01:11:25.916530   54218 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0717 01:11:25.917658   54218 addons.go:510] duration metric: took 1.23010095s for enable addons: enabled=[default-storageclass storage-provisioner]
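The addon-enable phase above comes down to copying the two manifests onto the node and applying them with the kubeadm-managed kubectl, as the ssh_runner lines show. A stand-alone equivalent of that apply step, run directly on the node instead of over SSH and without the sudo prefix, could look like this sketch (binary and manifest paths copied from the log):

// applyaddons invokes the node-local kubectl against the node-local
// kubeconfig for each addon manifest, mirroring the commands in the log.
// Must run with sufficient privileges to read /var/lib/minikube/kubeconfig.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	}
	for _, m := range manifests {
		cmd := exec.Command("/var/lib/minikube/binaries/v1.24.4/kubectl", "apply", "-f", m)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("apply %s: %v", m, err)
		}
	}
}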
	I0717 01:11:26.889105   54218 node_ready.go:53] node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:28.889765   54218 node_ready.go:53] node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:31.389126   54218 node_ready.go:53] node "test-preload-625427" has status "Ready":"False"
	I0717 01:11:32.388902   54218 node_ready.go:49] node "test-preload-625427" has status "Ready":"True"
	I0717 01:11:32.388928   54218 node_ready.go:38] duration metric: took 7.503632446s for node "test-preload-625427" to be "Ready" ...
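The node_ready wait that just completed polls the Node object until its Ready condition turns True, which is what the transitions above ("False" three times, then "True") reflect. A minimal client-go sketch of such a loop, assuming the default ~/.kube/config location rather than minikube's profile-specific kubeconfig:

// waitnodeready polls a node's Ready condition until it is True or the
// timeout expires. Sketch only; not minikube's node_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "test-preload-625427", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}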
	I0717 01:11:32.388939   54218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:11:32.393555   54218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-mcfz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:32.398067   54218 pod_ready.go:92] pod "coredns-6d4b75cb6d-mcfz8" in "kube-system" namespace has status "Ready":"True"
	I0717 01:11:32.398086   54218 pod_ready.go:81] duration metric: took 4.5087ms for pod "coredns-6d4b75cb6d-mcfz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:32.398093   54218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:34.406262   54218 pod_ready.go:102] pod "etcd-test-preload-625427" in "kube-system" namespace has status "Ready":"False"
	I0717 01:11:34.904813   54218 pod_ready.go:92] pod "etcd-test-preload-625427" in "kube-system" namespace has status "Ready":"True"
	I0717 01:11:34.904833   54218 pod_ready.go:81] duration metric: took 2.506733718s for pod "etcd-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:34.904842   54218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:34.908689   54218 pod_ready.go:92] pod "kube-apiserver-test-preload-625427" in "kube-system" namespace has status "Ready":"True"
	I0717 01:11:34.908704   54218 pod_ready.go:81] duration metric: took 3.856628ms for pod "kube-apiserver-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:34.908712   54218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:34.912793   54218 pod_ready.go:92] pod "kube-controller-manager-test-preload-625427" in "kube-system" namespace has status "Ready":"True"
	I0717 01:11:34.912810   54218 pod_ready.go:81] duration metric: took 4.091777ms for pod "kube-controller-manager-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:34.912820   54218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-knx4n" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:34.918351   54218 pod_ready.go:92] pod "kube-proxy-knx4n" in "kube-system" namespace has status "Ready":"True"
	I0717 01:11:34.918366   54218 pod_ready.go:81] duration metric: took 5.538923ms for pod "kube-proxy-knx4n" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:34.918376   54218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:35.189792   54218 pod_ready.go:92] pod "kube-scheduler-test-preload-625427" in "kube-system" namespace has status "Ready":"True"
	I0717 01:11:35.189816   54218 pod_ready.go:81] duration metric: took 271.432308ms for pod "kube-scheduler-test-preload-625427" in "kube-system" namespace to be "Ready" ...
	I0717 01:11:35.189826   54218 pod_ready.go:38] duration metric: took 2.800875609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:11:35.189841   54218 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:11:35.189888   54218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:11:35.205025   54218 api_server.go:72] duration metric: took 10.517535577s to wait for apiserver process to appear ...
	I0717 01:11:35.205058   54218 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:11:35.205092   54218 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0717 01:11:35.210101   54218 api_server.go:279] https://192.168.39.182:8443/healthz returned 200:
	ok
	I0717 01:11:35.210969   54218 api_server.go:141] control plane version: v1.24.4
	I0717 01:11:35.210987   54218 api_server.go:131] duration metric: took 5.921925ms to wait for apiserver health ...
	I0717 01:11:35.210994   54218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:11:35.392735   54218 system_pods.go:59] 7 kube-system pods found
	I0717 01:11:35.392773   54218 system_pods.go:61] "coredns-6d4b75cb6d-mcfz8" [9a204f23-4455-47b4-9909-6ae9298090c1] Running
	I0717 01:11:35.392779   54218 system_pods.go:61] "etcd-test-preload-625427" [b2392c76-9a78-4e41-8589-00270b564125] Running
	I0717 01:11:35.392784   54218 system_pods.go:61] "kube-apiserver-test-preload-625427" [f60e237e-d6fb-4f42-b812-4852dc7d287d] Running
	I0717 01:11:35.392790   54218 system_pods.go:61] "kube-controller-manager-test-preload-625427" [7746406d-990f-4cc2-ac33-00da97ee125b] Running
	I0717 01:11:35.392800   54218 system_pods.go:61] "kube-proxy-knx4n" [ad9e512e-e38d-4bac-bafc-650f721699fe] Running
	I0717 01:11:35.392807   54218 system_pods.go:61] "kube-scheduler-test-preload-625427" [4904fd20-2b45-4893-aca1-c3e088bc1f91] Running
	I0717 01:11:35.392811   54218 system_pods.go:61] "storage-provisioner" [edbedd44-0a2a-4bad-a907-410adb53e0f5] Running
	I0717 01:11:35.392820   54218 system_pods.go:74] duration metric: took 181.818777ms to wait for pod list to return data ...
	I0717 01:11:35.392832   54218 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:11:35.589239   54218 default_sa.go:45] found service account: "default"
	I0717 01:11:35.589263   54218 default_sa.go:55] duration metric: took 196.423317ms for default service account to be created ...
	I0717 01:11:35.589272   54218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:11:35.792789   54218 system_pods.go:86] 7 kube-system pods found
	I0717 01:11:35.792815   54218 system_pods.go:89] "coredns-6d4b75cb6d-mcfz8" [9a204f23-4455-47b4-9909-6ae9298090c1] Running
	I0717 01:11:35.792821   54218 system_pods.go:89] "etcd-test-preload-625427" [b2392c76-9a78-4e41-8589-00270b564125] Running
	I0717 01:11:35.792825   54218 system_pods.go:89] "kube-apiserver-test-preload-625427" [f60e237e-d6fb-4f42-b812-4852dc7d287d] Running
	I0717 01:11:35.792829   54218 system_pods.go:89] "kube-controller-manager-test-preload-625427" [7746406d-990f-4cc2-ac33-00da97ee125b] Running
	I0717 01:11:35.792832   54218 system_pods.go:89] "kube-proxy-knx4n" [ad9e512e-e38d-4bac-bafc-650f721699fe] Running
	I0717 01:11:35.792836   54218 system_pods.go:89] "kube-scheduler-test-preload-625427" [4904fd20-2b45-4893-aca1-c3e088bc1f91] Running
	I0717 01:11:35.792840   54218 system_pods.go:89] "storage-provisioner" [edbedd44-0a2a-4bad-a907-410adb53e0f5] Running
	I0717 01:11:35.792852   54218 system_pods.go:126] duration metric: took 203.569904ms to wait for k8s-apps to be running ...
	I0717 01:11:35.792861   54218 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:11:35.792916   54218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:11:35.808211   54218 system_svc.go:56] duration metric: took 15.341676ms WaitForService to wait for kubelet
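The WaitForService step above amounts to running systemctl's is-active check for the kubelet unit over SSH. A local Go equivalent (without the SSH hop or sudo, and checking the kubelet unit directly rather than passing the extra "service" token seen in the logged command):

// kubeletactive reports whether the kubelet systemd unit is active.
// is-active exits 0 when the unit is active, non-zero otherwise.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}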
	I0717 01:11:35.808238   54218 kubeadm.go:582] duration metric: took 11.120755016s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:11:35.808257   54218 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:11:35.989815   54218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:11:35.989840   54218 node_conditions.go:123] node cpu capacity is 2
	I0717 01:11:35.989849   54218 node_conditions.go:105] duration metric: took 181.587042ms to run NodePressure ...
	I0717 01:11:35.989859   54218 start.go:241] waiting for startup goroutines ...
	I0717 01:11:35.989867   54218 start.go:246] waiting for cluster config update ...
	I0717 01:11:35.989879   54218 start.go:255] writing updated cluster config ...
	I0717 01:11:35.990168   54218 ssh_runner.go:195] Run: rm -f paused
	I0717 01:11:36.035140   54218 start.go:600] kubectl: 1.30.2, cluster: 1.24.4 (minor skew: 6)
	I0717 01:11:36.037064   54218 out.go:177] 
	W0717 01:11:36.038304   54218 out.go:239] ! /usr/local/bin/kubectl is version 1.30.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0717 01:11:36.039419   54218 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0717 01:11:36.040691   54218 out.go:177] * Done! kubectl is now configured to use "test-preload-625427" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 17 01:11:36 test-preload-625427 crio[699]: time="2024-07-17 01:11:36.979948268Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:8c024f5e18ef245cecd032618a26846fbf3f5978049048f45fd45a0c7223abe4,Verbose:false,}" file="otel-collector/interceptors.go:62" id=c6e7a0ad-c35d-4c80-82cb-a124eb04e8ce name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 01:11:36 test-preload-625427 crio[699]: time="2024-07-17 01:11:36.980087202Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:8c024f5e18ef245cecd032618a26846fbf3f5978049048f45fd45a0c7223abe4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1721178677835932474,StartedAt:1721178677981484047,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.3-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7153afa337e1538eed0195ab521c20,},Annotations:map[string]string{io.kubernetes.container.hash: 48a8c8f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/fa7153afa337e1538eed0195ab521c20/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/fa7153afa337e1538eed0195ab521c20/containers/etcd/e8064f6d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_et
cd-test-preload-625427_fa7153afa337e1538eed0195ab521c20/etcd/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=c6e7a0ad-c35d-4c80-82cb-a124eb04e8ce name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 01:11:36 test-preload-625427 crio[699]: time="2024-07-17 01:11:36.980533553Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:eccf0e6d16e999b7c7b6aca67a770380f4ae6b1c4edf1f4353fea476fecdbe6a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=733b102e-c43a-47c4-888e-fccc0c419dbd name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 01:11:36 test-preload-625427 crio[699]: time="2024-07-17 01:11:36.980618603Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:eccf0e6d16e999b7c7b6aca67a770380f4ae6b1c4edf1f4353fea476fecdbe6a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1721178677769881290,StartedAt:1721178677858471636,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.24.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d79de0ea03377545bc123b662306937,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/4d79de0ea03377545bc123b662306937/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/4d79de0ea03377545bc123b662306937/containers/kube-scheduler/617c8fab,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-test-preload-625427_4d79de0ea03377545bc123b662306937/kube-scheduler/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources
{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=733b102e-c43a-47c4-888e-fccc0c419dbd name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 01:11:36 test-preload-625427 crio[699]: time="2024-07-17 01:11:36.981128439Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:15d36dceb7dca90d3ecc4b5bbc90b7634444c695410e0f8a1ba45c77e4aa773f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=1861c09e-f210-4a7d-b5a2-84274229954f name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 01:11:36 test-preload-625427 crio[699]: time="2024-07-17 01:11:36.981283142Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:15d36dceb7dca90d3ecc4b5bbc90b7634444c695410e0f8a1ba45c77e4aa773f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1721178677687404241,StartedAt:1721178677793269073,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.24.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c7667a23de3833529b6bbf6167461a,},Annotations:map[string]string{io.kubernetes.container.hash: bf659517,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b8c7667a23de3833529b6bbf6167461a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b8c7667a23de3833529b6bbf6167461a/containers/kube-apiserver/0a8be4bf,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Con
tainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-test-preload-625427_b8c7667a23de3833529b6bbf6167461a/kube-apiserver/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1861c09e-f210-4a7d-b5a2-84274229954f name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 01:11:36 test-preload-625427 crio[699]: time="2024-07-17 01:11:36.987579014Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a36adcbe-090c-4a37-a96e-cd69f296581b name=/runtime.v1.ImageService/ListImages
	Jul 17 01:11:36 test-preload-625427 crio[699]: time="2024-07-17 01:11:36.988008767Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,RepoTags:[k8s.gcr.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-apiserver:v1.24.4],RepoDigests:[k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857 k8s.gcr.io/kube-apiserver@sha256:74496d788bad4b343b2a2ead2b4ac8f4d0d99c45c451b51c076f22e52b84f1e5 k8s.gcr.io/kube-apiserver@sha256:aa1ef03e6734883f677c768fa970d54c8ae490aad157b34c91e73adb7e4d5a90 registry.k8s.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857 registry.k8s.io/kube-apiserver@sha256:74496d788bad4b343b2a2ead2b4ac8f4d0d99c45c451b51c076f22e52b84f1e5 registry.k8s.io/kube-apiserver@sha256:aa1ef03e6734883f677c768fa970d54c8ae490aad157b34c91e73adb7e4d5a90],Size_:131097841,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:1f99c
b6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,RepoTags:[k8s.gcr.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4],RepoDigests:[k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891 k8s.gcr.io/kube-controller-manager@sha256:da588c9f0e65e93317f5e016603d1ed7466427e9e0cf8b028c505bf30837f7dd k8s.gcr.io/kube-controller-manager@sha256:f9400b11d780871e4e87cac8a8d4f8fc6bb83d7793b58981020b43be55f71cb9 registry.k8s.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891 registry.k8s.io/kube-controller-manager@sha256:da588c9f0e65e93317f5e016603d1ed7466427e9e0cf8b028c505bf30837f7dd registry.k8s.io/kube-controller-manager@sha256:f9400b11d780871e4e87cac8a8d4f8fc6bb83d7793b58981020b43be55f71cb9],Size_:120743002,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,RepoTags:[k8s.gcr.io/kube-scheduler:v1.2
4.4 registry.k8s.io/kube-scheduler:v1.24.4],RepoDigests:[k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2 k8s.gcr.io/kube-scheduler@sha256:a16e4ce348403bc65bc6b755aef81e4970685c4e32fc398b10e49de15993ba21 k8s.gcr.io/kube-scheduler@sha256:cf1e1f85916287003e82d852a709917e200afd5caca04499d525ee98c21677bb registry.k8s.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2 registry.k8s.io/kube-scheduler@sha256:a16e4ce348403bc65bc6b755aef81e4970685c4e32fc398b10e49de15993ba21 registry.k8s.io/kube-scheduler@sha256:cf1e1f85916287003e82d852a709917e200afd5caca04499d525ee98c21677bb],Size_:52343896,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,RepoTags:[k8s.gcr.io/kube-proxy:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4],RepoDigests:[k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386 k8s.gcr.io/kube-proxy@sha256:b
fac4b9fbf43ee6e1b30f90bc5a889067a4b4081927b4b6d322ed107a8549ab0 k8s.gcr.io/kube-proxy@sha256:fec80877f53c7999f8268ab856ef2517f01a72b5de910c77f921ef784d44617f registry.k8s.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386 registry.k8s.io/kube-proxy@sha256:bfac4b9fbf43ee6e1b30f90bc5a889067a4b4081927b4b6d322ed107a8549ab0 registry.k8s.io/kube-proxy@sha256:fec80877f53c7999f8268ab856ef2517f01a72b5de910c77f921ef784d44617f],Size_:111862619,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165,RepoTags:[k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],RepoDigests:[k8s.gcr.io/pause@sha256:7be59e780e44025b8bdfe535f04a7e83ea03dd949037ebfcfdbf5880c8f87ac7 k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause@sha256:f81611a21cf91214c1ea751c5b525931a0e2ebabe62b3937b6158039ff6f922d registry.k8s.io/pause@sha256:7be59e780e44025b8bdfe535f04a7e83ea03dd949037ebfcfdbf5880c8f87ac7 reg
istry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:f81611a21cf91214c1ea751c5b525931a0e2ebabe62b3937b6158039ff6f922d],Size_:718423,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,RepoTags:[k8s.gcr.io/etcd:3.5.3-0 registry.k8s.io/etcd:3.5.3-0],RepoDigests:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 k8s.gcr.io/etcd@sha256:533631a3c25663124e848280973b1a5d5ae34f8766fef9b6b839d4b08c893e38 k8s.gcr.io/etcd@sha256:678382ed340f6996ad40cdba4a4745a2ada41ed9c322c026a2a695338a93dcbe registry.k8s.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 registry.k8s.io/etcd@sha256:533631a3c25663124e848280973b1a5d5ae34f8766fef9b6b839d4b08c893e38 registry.k8s.io/etcd@sha256:678382ed340f6996ad40cdba4a4745a2ada41ed9c322c026a2a695338a93dcbe],Size_:300857875,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinne
d:false,},&Image{Id:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.6 registry.k8s.io/coredns/coredns:v1.8.6],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns@sha256:8916c89e1538ea3941b58847e448a2c6d940c01b8e716b20423d2d8b189d3972 k8s.gcr.io/coredns/coredns@sha256:a0d77904d929b640f13c5098c70950d084042bed9ef73b60bfe00974a84ab722 registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns@sha256:8916c89e1538ea3941b58847e448a2c6d940c01b8e716b20423d2d8b189d3972 registry.k8s.io/coredns/coredns@sha256:a0d77904d929b640f13c5098c70950d084042bed9ef73b60bfe00974a84ab722],Size_:46959895,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:d921cee8494827575ce8b9cc6cf7dae988b6378ce3f62217bf430467916529b9,RepoTags:[docker.io/kindest/kindnetd:v20220726-ed811e41],RepoDigests:[docker.io/kindest/kindnetd@sha256:5240e7ff1fefade59846259c1edabad82fe4c642c66b7850947015d1dd699251 docker.io/kindest/kindnetd@sha256:e2d4d675dcf28a90102ad5219b75c5a0ee096c4321247dfae31dd1467611a9fb],Size_:63344219,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=a36adcbe-090c-4a37-a96e-cd69f296581b name=/runtime.v1.ImageService/ListImages
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.017692790Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00dfaa1e-768b-4ace-bad1-734a8e43a56d name=/runtime.v1.RuntimeService/Version
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.017760142Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00dfaa1e-768b-4ace-bad1-734a8e43a56d name=/runtime.v1.RuntimeService/Version
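The journal entries in this section are CRI-O answering CRI gRPC calls (ContainerStatus, ListImages, Version, ImageFsInfo, ListContainers, ListPodSandbox) from the kubelet and from crictl-style clients. A minimal Go client for the Version RPC shown just above, assuming CRI-O's conventional socket path /var/run/crio/crio.sock rather than a path read from the VM:

// criversion issues a CRI RuntimeService/Version request to CRI-O over its
// unix socket, the same RPC logged above. Socket path is an assumption.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("runtime %s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}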
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.019358800Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fd3eade-647b-4a90-aabc-b6ecea40bf74 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.019768194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721178697019746695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fd3eade-647b-4a90-aabc-b6ecea40bf74 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.020448880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d970004-6075-4ba3-9c4d-8a18618a1e00 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.020514438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d970004-6075-4ba3-9c4d-8a18618a1e00 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.020666220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c409b3cc0b1dee706be1a72e9c1dbad128a79b82ac25dfa24c0f69eb08b369e,PodSandboxId:e1e94a6f8acde581336da067eed24f662a7a36f7b078a097f79bd5b38ba34d0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721178690346979144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-mcfz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a204f23-4455-47b4-9909-6ae9298090c1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b71c3d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601d8645cf67f3ca07457636bd5b682836843d53462f10c7f21f9dabfa59662f,PodSandboxId:5de9fc128ee3ffcc751a27d4f8c8291ad45575790a1a0f0e7a1963e41bb5152c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721178683317705490,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-knx4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ad9e512e-e38d-4bac-bafc-650f721699fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6a8b7552,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61dc817a7ee8e811f922b613aa5a94da7fde97e9cd76f9f983270ede28e3fe03,PodSandboxId:9953513fe19c6e614eb8747233e1044b92d5e0f89b985f204ac841fc596d3d92,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721178682930888546,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed
bedd44-0a2a-4bad-a907-410adb53e0f5,},Annotations:map[string]string{io.kubernetes.container.hash: ef21ad5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2242789381f9a4b19a1089faa9e512b669b158a1ef3fa76d73eaa7d33a83bef,PodSandboxId:a5b507ce7f4a339ed0e049dfd9c176dd7d82ff910651f2b19b5bb6aecfee5983,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721178677714526489,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 28d9b7d2ca82e330105cd96e9affeff8,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c024f5e18ef245cecd032618a26846fbf3f5978049048f45fd45a0c7223abe4,PodSandboxId:2c76d98251359df0e58bb953a827cdaec6521182b2aef4620dcc1ed1b98a8241,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721178677694933967,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7153afa337e1538eed019
5ab521c20,},Annotations:map[string]string{io.kubernetes.container.hash: 48a8c8f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eccf0e6d16e999b7c7b6aca67a770380f4ae6b1c4edf1f4353fea476fecdbe6a,PodSandboxId:52c96b7fa09a8f7d56d4f327b95c4b406f0e778717f9763f8399fcd8e0a76066,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721178677690643755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d79de0ea03377545bc123b662306937,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d36dceb7dca90d3ecc4b5bbc90b7634444c695410e0f8a1ba45c77e4aa773f,PodSandboxId:4bb2535e5db079e3f3b0e9ee3867bece8b11a7b26d37a82d6f3446fb0b1c92cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721178677630579292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c7667a23de3833529b6bbf6167461a,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf659517,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d970004-6075-4ba3-9c4d-8a18618a1e00 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.028982590Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd750d93-210c-4321-be52-f8542320f35b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.029152926Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e1e94a6f8acde581336da067eed24f662a7a36f7b078a097f79bd5b38ba34d0c,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-mcfz8,Uid:9a204f23-4455-47b4-9909-6ae9298090c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721178690131953130,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-mcfz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a204f23-4455-47b4-9909-6ae9298090c1,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T01:11:21.910427638Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5de9fc128ee3ffcc751a27d4f8c8291ad45575790a1a0f0e7a1963e41bb5152c,Metadata:&PodSandboxMetadata{Name:kube-proxy-knx4n,Uid:ad9e512e-e38d-4bac-bafc-650f721699fe,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1721178683124334684,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-knx4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9e512e-e38d-4bac-bafc-650f721699fe,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T01:11:21.910431803Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9953513fe19c6e614eb8747233e1044b92d5e0f89b985f204ac841fc596d3d92,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:edbedd44-0a2a-4bad-a907-410adb53e0f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721178682820696861,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edbedd44-0a2a-4bad-a907-410a
db53e0f5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T01:11:21.910406119Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52c96b7fa09a8f7d56d4f327b95c4b406f0e778717f9763f8399fcd8e0a76066,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-625427,Uid:4d79de0
ea03377545bc123b662306937,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721178677455206047,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d79de0ea03377545bc123b662306937,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4d79de0ea03377545bc123b662306937,kubernetes.io/config.seen: 2024-07-17T01:11:16.896595020Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c76d98251359df0e58bb953a827cdaec6521182b2aef4620dcc1ed1b98a8241,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-625427,Uid:fa7153afa337e1538eed0195ab521c20,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721178677452583587,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
a7153afa337e1538eed0195ab521c20,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.182:2379,kubernetes.io/config.hash: fa7153afa337e1538eed0195ab521c20,kubernetes.io/config.seen: 2024-07-17T01:11:16.963619458Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a5b507ce7f4a339ed0e049dfd9c176dd7d82ff910651f2b19b5bb6aecfee5983,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-625427,Uid:28d9b7d2ca82e330105cd96e9affeff8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721178677450169737,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d9b7d2ca82e330105cd96e9affeff8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 28d9b7d2ca82e330105cd96e9affeff8,kubernetes.io/config.seen: 2024-07-17T01
:11:16.896593637Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4bb2535e5db079e3f3b0e9ee3867bece8b11a7b26d37a82d6f3446fb0b1c92cd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-625427,Uid:b8c7667a23de3833529b6bbf6167461a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721178677436541395,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c7667a23de3833529b6bbf6167461a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.182:8443,kubernetes.io/config.hash: b8c7667a23de3833529b6bbf6167461a,kubernetes.io/config.seen: 2024-07-17T01:11:16.896564048Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=cd750d93-210c-4321-be52-f8542320f35b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.029767300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50b8a536-936e-4cea-9a6d-8e4b46733a5c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.029846984Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50b8a536-936e-4cea-9a6d-8e4b46733a5c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.030014712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c409b3cc0b1dee706be1a72e9c1dbad128a79b82ac25dfa24c0f69eb08b369e,PodSandboxId:e1e94a6f8acde581336da067eed24f662a7a36f7b078a097f79bd5b38ba34d0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721178690346979144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-mcfz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a204f23-4455-47b4-9909-6ae9298090c1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b71c3d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601d8645cf67f3ca07457636bd5b682836843d53462f10c7f21f9dabfa59662f,PodSandboxId:5de9fc128ee3ffcc751a27d4f8c8291ad45575790a1a0f0e7a1963e41bb5152c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721178683317705490,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-knx4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ad9e512e-e38d-4bac-bafc-650f721699fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6a8b7552,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61dc817a7ee8e811f922b613aa5a94da7fde97e9cd76f9f983270ede28e3fe03,PodSandboxId:9953513fe19c6e614eb8747233e1044b92d5e0f89b985f204ac841fc596d3d92,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721178682930888546,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed
bedd44-0a2a-4bad-a907-410adb53e0f5,},Annotations:map[string]string{io.kubernetes.container.hash: ef21ad5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2242789381f9a4b19a1089faa9e512b669b158a1ef3fa76d73eaa7d33a83bef,PodSandboxId:a5b507ce7f4a339ed0e049dfd9c176dd7d82ff910651f2b19b5bb6aecfee5983,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721178677714526489,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 28d9b7d2ca82e330105cd96e9affeff8,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c024f5e18ef245cecd032618a26846fbf3f5978049048f45fd45a0c7223abe4,PodSandboxId:2c76d98251359df0e58bb953a827cdaec6521182b2aef4620dcc1ed1b98a8241,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721178677694933967,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7153afa337e1538eed019
5ab521c20,},Annotations:map[string]string{io.kubernetes.container.hash: 48a8c8f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eccf0e6d16e999b7c7b6aca67a770380f4ae6b1c4edf1f4353fea476fecdbe6a,PodSandboxId:52c96b7fa09a8f7d56d4f327b95c4b406f0e778717f9763f8399fcd8e0a76066,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721178677690643755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d79de0ea03377545bc123b662306937,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d36dceb7dca90d3ecc4b5bbc90b7634444c695410e0f8a1ba45c77e4aa773f,PodSandboxId:4bb2535e5db079e3f3b0e9ee3867bece8b11a7b26d37a82d6f3446fb0b1c92cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721178677630579292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c7667a23de3833529b6bbf6167461a,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf659517,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50b8a536-936e-4cea-9a6d-8e4b46733a5c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.030659004Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d113e4c-8999-465b-bda7-7cfdd473824e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.030823465Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e1e94a6f8acde581336da067eed24f662a7a36f7b078a097f79bd5b38ba34d0c,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-mcfz8,Uid:9a204f23-4455-47b4-9909-6ae9298090c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721178690131953130,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-mcfz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a204f23-4455-47b4-9909-6ae9298090c1,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T01:11:21.910427638Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5de9fc128ee3ffcc751a27d4f8c8291ad45575790a1a0f0e7a1963e41bb5152c,Metadata:&PodSandboxMetadata{Name:kube-proxy-knx4n,Uid:ad9e512e-e38d-4bac-bafc-650f721699fe,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1721178683124334684,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-knx4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9e512e-e38d-4bac-bafc-650f721699fe,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T01:11:21.910431803Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9953513fe19c6e614eb8747233e1044b92d5e0f89b985f204ac841fc596d3d92,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:edbedd44-0a2a-4bad-a907-410adb53e0f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721178682820696861,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edbedd44-0a2a-4bad-a907-410a
db53e0f5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T01:11:21.910406119Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52c96b7fa09a8f7d56d4f327b95c4b406f0e778717f9763f8399fcd8e0a76066,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-625427,Uid:4d79de0
ea03377545bc123b662306937,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721178677455206047,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d79de0ea03377545bc123b662306937,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4d79de0ea03377545bc123b662306937,kubernetes.io/config.seen: 2024-07-17T01:11:16.896595020Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c76d98251359df0e58bb953a827cdaec6521182b2aef4620dcc1ed1b98a8241,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-625427,Uid:fa7153afa337e1538eed0195ab521c20,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721178677452583587,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
a7153afa337e1538eed0195ab521c20,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.182:2379,kubernetes.io/config.hash: fa7153afa337e1538eed0195ab521c20,kubernetes.io/config.seen: 2024-07-17T01:11:16.963619458Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a5b507ce7f4a339ed0e049dfd9c176dd7d82ff910651f2b19b5bb6aecfee5983,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-625427,Uid:28d9b7d2ca82e330105cd96e9affeff8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721178677450169737,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d9b7d2ca82e330105cd96e9affeff8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 28d9b7d2ca82e330105cd96e9affeff8,kubernetes.io/config.seen: 2024-07-17T01
:11:16.896593637Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4bb2535e5db079e3f3b0e9ee3867bece8b11a7b26d37a82d6f3446fb0b1c92cd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-625427,Uid:b8c7667a23de3833529b6bbf6167461a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721178677436541395,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c7667a23de3833529b6bbf6167461a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.182:8443,kubernetes.io/config.hash: b8c7667a23de3833529b6bbf6167461a,kubernetes.io/config.seen: 2024-07-17T01:11:16.896564048Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6d113e4c-8999-465b-bda7-7cfdd473824e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.031846020Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d52b4276-6fa1-4152-9bfc-0bac30f38e04 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.031908409Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d52b4276-6fa1-4152-9bfc-0bac30f38e04 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:11:37 test-preload-625427 crio[699]: time="2024-07-17 01:11:37.032051660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c409b3cc0b1dee706be1a72e9c1dbad128a79b82ac25dfa24c0f69eb08b369e,PodSandboxId:e1e94a6f8acde581336da067eed24f662a7a36f7b078a097f79bd5b38ba34d0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721178690346979144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-mcfz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a204f23-4455-47b4-9909-6ae9298090c1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b71c3d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601d8645cf67f3ca07457636bd5b682836843d53462f10c7f21f9dabfa59662f,PodSandboxId:5de9fc128ee3ffcc751a27d4f8c8291ad45575790a1a0f0e7a1963e41bb5152c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721178683317705490,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-knx4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ad9e512e-e38d-4bac-bafc-650f721699fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6a8b7552,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61dc817a7ee8e811f922b613aa5a94da7fde97e9cd76f9f983270ede28e3fe03,PodSandboxId:9953513fe19c6e614eb8747233e1044b92d5e0f89b985f204ac841fc596d3d92,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721178682930888546,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed
bedd44-0a2a-4bad-a907-410adb53e0f5,},Annotations:map[string]string{io.kubernetes.container.hash: ef21ad5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2242789381f9a4b19a1089faa9e512b669b158a1ef3fa76d73eaa7d33a83bef,PodSandboxId:a5b507ce7f4a339ed0e049dfd9c176dd7d82ff910651f2b19b5bb6aecfee5983,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721178677714526489,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 28d9b7d2ca82e330105cd96e9affeff8,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c024f5e18ef245cecd032618a26846fbf3f5978049048f45fd45a0c7223abe4,PodSandboxId:2c76d98251359df0e58bb953a827cdaec6521182b2aef4620dcc1ed1b98a8241,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721178677694933967,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7153afa337e1538eed019
5ab521c20,},Annotations:map[string]string{io.kubernetes.container.hash: 48a8c8f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eccf0e6d16e999b7c7b6aca67a770380f4ae6b1c4edf1f4353fea476fecdbe6a,PodSandboxId:52c96b7fa09a8f7d56d4f327b95c4b406f0e778717f9763f8399fcd8e0a76066,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721178677690643755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d79de0ea03377545bc123b662306937,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d36dceb7dca90d3ecc4b5bbc90b7634444c695410e0f8a1ba45c77e4aa773f,PodSandboxId:4bb2535e5db079e3f3b0e9ee3867bece8b11a7b26d37a82d6f3446fb0b1c92cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721178677630579292,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-625427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c7667a23de3833529b6bbf6167461a,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf659517,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d52b4276-6fa1-4152-9bfc-0bac30f38e04 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2c409b3cc0b1d       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   e1e94a6f8acde       coredns-6d4b75cb6d-mcfz8
	601d8645cf67f       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   5de9fc128ee3f       kube-proxy-knx4n
	61dc817a7ee8e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   9953513fe19c6       storage-provisioner
	c2242789381f9       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   a5b507ce7f4a3       kube-controller-manager-test-preload-625427
	8c024f5e18ef2       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   2c76d98251359       etcd-test-preload-625427
	eccf0e6d16e99       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   52c96b7fa09a8       kube-scheduler-test-preload-625427
	15d36dceb7dca       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   4bb2535e5db07       kube-apiserver-test-preload-625427
	
	
	==> coredns [2c409b3cc0b1dee706be1a72e9c1dbad128a79b82ac25dfa24c0f69eb08b369e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:40186 - 20770 "HINFO IN 8139035819483276755.7537707043910663610. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009187185s
	
	
	==> describe nodes <==
	Name:               test-preload-625427
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-625427
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=test-preload-625427
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_10_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:10:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-625427
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:11:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:11:32 +0000   Wed, 17 Jul 2024 01:10:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:11:32 +0000   Wed, 17 Jul 2024 01:10:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:11:32 +0000   Wed, 17 Jul 2024 01:10:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:11:32 +0000   Wed, 17 Jul 2024 01:11:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    test-preload-625427
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 18fc10abe6df48679c6fc896d1c9dfe7
	  System UUID:                18fc10ab-e6df-4867-9c6f-c896d1c9dfe7
	  Boot ID:                    42c61638-b244-4d02-8031-087cd1a63bb9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-mcfz8                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     75s
	  kube-system                 etcd-test-preload-625427                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         88s
	  kube-system                 kube-apiserver-test-preload-625427             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-test-preload-625427    200m (10%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-knx4n                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-test-preload-625427             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 74s                kube-proxy       
	  Normal  NodeHasSufficientMemory  95s (x4 over 96s)  kubelet          Node test-preload-625427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x4 over 96s)  kubelet          Node test-preload-625427 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x4 over 96s)  kubelet          Node test-preload-625427 status is now: NodeHasSufficientPID
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s                kubelet          Node test-preload-625427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s                kubelet          Node test-preload-625427 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s                kubelet          Node test-preload-625427 status is now: NodeHasSufficientPID
	  Normal  NodeReady                77s                kubelet          Node test-preload-625427 status is now: NodeReady
	  Normal  RegisteredNode           76s                node-controller  Node test-preload-625427 event: Registered Node test-preload-625427 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20s (x8 over 21s)  kubelet          Node test-preload-625427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 21s)  kubelet          Node test-preload-625427 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 21s)  kubelet          Node test-preload-625427 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-625427 event: Registered Node test-preload-625427 in Controller
	
	
	==> dmesg <==
	[Jul17 01:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050869] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039965] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.511362] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.048799] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.573236] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.424843] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.060749] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061780] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.158077] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[Jul17 01:11] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.271713] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[ +14.234391] systemd-fstab-generator[964]: Ignoring "noauto" option for root device
	[  +0.058902] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.862159] systemd-fstab-generator[1094]: Ignoring "noauto" option for root device
	[  +4.088480] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.956282] systemd-fstab-generator[1737]: Ignoring "noauto" option for root device
	[  +5.391490] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [8c024f5e18ef245cecd032618a26846fbf3f5978049048f45fd45a0c7223abe4] <==
	{"level":"info","ts":"2024-07-17T01:11:18.347Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"50ad4904f737d679","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-17T01:11:18.348Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-17T01:11:18.352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50ad4904f737d679 switched to configuration voters=(5813382979681506937)"}
	{"level":"info","ts":"2024-07-17T01:11:18.352Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c3ca243f487c5ef6","local-member-id":"50ad4904f737d679","added-peer-id":"50ad4904f737d679","added-peer-peer-urls":["https://192.168.39.182:2380"]}
	{"level":"info","ts":"2024-07-17T01:11:18.352Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c3ca243f487c5ef6","local-member-id":"50ad4904f737d679","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:11:18.352Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:11:18.355Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:11:18.362Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"50ad4904f737d679","initial-advertise-peer-urls":["https://192.168.39.182:2380"],"listen-peer-urls":["https://192.168.39.182:2380"],"advertise-client-urls":["https://192.168.39.182:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.182:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:11:18.362Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:11:18.362Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.182:2380"}
	{"level":"info","ts":"2024-07-17T01:11:18.362Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.182:2380"}
	{"level":"info","ts":"2024-07-17T01:11:19.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50ad4904f737d679 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T01:11:19.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50ad4904f737d679 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:11:19.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50ad4904f737d679 received MsgPreVoteResp from 50ad4904f737d679 at term 2"}
	{"level":"info","ts":"2024-07-17T01:11:19.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50ad4904f737d679 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T01:11:19.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50ad4904f737d679 received MsgVoteResp from 50ad4904f737d679 at term 3"}
	{"level":"info","ts":"2024-07-17T01:11:19.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50ad4904f737d679 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T01:11:19.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 50ad4904f737d679 elected leader 50ad4904f737d679 at term 3"}
	{"level":"info","ts":"2024-07-17T01:11:19.380Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"50ad4904f737d679","local-member-attributes":"{Name:test-preload-625427 ClientURLs:[https://192.168.39.182:2379]}","request-path":"/0/members/50ad4904f737d679/attributes","cluster-id":"c3ca243f487c5ef6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:11:19.380Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:11:19.381Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:11:19.382Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.182:2379"}
	{"level":"info","ts":"2024-07-17T01:11:19.382Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:11:19.382Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:11:19.382Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:11:37 up 0 min,  0 users,  load average: 0.91, 0.25, 0.08
	Linux test-preload-625427 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [15d36dceb7dca90d3ecc4b5bbc90b7634444c695410e0f8a1ba45c77e4aa773f] <==
	I0717 01:11:21.735505       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0717 01:11:21.735530       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0717 01:11:21.735547       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0717 01:11:21.735702       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 01:11:21.736486       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 01:11:21.737053       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0717 01:11:21.737095       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0717 01:11:21.821519       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0717 01:11:21.822964       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0717 01:11:21.823318       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:11:21.825133       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 01:11:21.826458       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 01:11:21.840756       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0717 01:11:21.856013       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:11:21.899438       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0717 01:11:22.370476       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 01:11:22.727373       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:11:23.222856       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0717 01:11:23.242096       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0717 01:11:23.296855       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0717 01:11:23.328735       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:11:23.336997       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:11:23.557976       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0717 01:11:34.120478       1 controller.go:611] quota admission added evaluator for: endpoints
	I0717 01:11:34.121042       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c2242789381f9a4b19a1089faa9e512b669b158a1ef3fa76d73eaa7d33a83bef] <==
	I0717 01:11:34.125949       1 shared_informer.go:262] Caches are synced for PVC protection
	I0717 01:11:34.130325       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0717 01:11:34.130406       1 shared_informer.go:262] Caches are synced for PV protection
	I0717 01:11:34.132148       1 shared_informer.go:262] Caches are synced for GC
	I0717 01:11:34.145364       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0717 01:11:34.148585       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0717 01:11:34.154864       1 shared_informer.go:262] Caches are synced for namespace
	I0717 01:11:34.156412       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0717 01:11:34.159439       1 shared_informer.go:262] Caches are synced for service account
	I0717 01:11:34.163397       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0717 01:11:34.165838       1 shared_informer.go:262] Caches are synced for HPA
	I0717 01:11:34.185613       1 shared_informer.go:262] Caches are synced for disruption
	I0717 01:11:34.185739       1 disruption.go:371] Sending events to api server.
	I0717 01:11:34.215623       1 shared_informer.go:262] Caches are synced for taint
	I0717 01:11:34.215882       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0717 01:11:34.215942       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0717 01:11:34.216160       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-625427. Assuming now as a timestamp.
	I0717 01:11:34.216295       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0717 01:11:34.216642       1 event.go:294] "Event occurred" object="test-preload-625427" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-625427 event: Registered Node test-preload-625427 in Controller"
	I0717 01:11:34.331338       1 shared_informer.go:262] Caches are synced for resource quota
	I0717 01:11:34.360495       1 shared_informer.go:262] Caches are synced for persistent volume
	I0717 01:11:34.382066       1 shared_informer.go:262] Caches are synced for resource quota
	I0717 01:11:34.804400       1 shared_informer.go:262] Caches are synced for garbage collector
	I0717 01:11:34.804492       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0717 01:11:34.810582       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [601d8645cf67f3ca07457636bd5b682836843d53462f10c7f21f9dabfa59662f] <==
	I0717 01:11:23.517150       1 node.go:163] Successfully retrieved node IP: 192.168.39.182
	I0717 01:11:23.517347       1 server_others.go:138] "Detected node IP" address="192.168.39.182"
	I0717 01:11:23.517525       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0717 01:11:23.550982       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0717 01:11:23.551049       1 server_others.go:206] "Using iptables Proxier"
	I0717 01:11:23.551587       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0717 01:11:23.552346       1 server.go:661] "Version info" version="v1.24.4"
	I0717 01:11:23.552389       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:11:23.553839       1 config.go:317] "Starting service config controller"
	I0717 01:11:23.554216       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0717 01:11:23.554337       1 config.go:226] "Starting endpoint slice config controller"
	I0717 01:11:23.554358       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0717 01:11:23.555353       1 config.go:444] "Starting node config controller"
	I0717 01:11:23.555389       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0717 01:11:23.654705       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0717 01:11:23.654788       1 shared_informer.go:262] Caches are synced for service config
	I0717 01:11:23.655624       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [eccf0e6d16e999b7c7b6aca67a770380f4ae6b1c4edf1f4353fea476fecdbe6a] <==
	I0717 01:11:18.950350       1 serving.go:348] Generated self-signed cert in-memory
	W0717 01:11:21.778147       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:11:21.779024       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:11:21.779127       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:11:21.779258       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:11:21.824841       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0717 01:11:21.824882       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:11:21.835118       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 01:11:21.835368       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:11:21.835411       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:11:21.835442       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:11:21.937558       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:11:21 test-preload-625427 kubelet[1101]: I0717 01:11:21.959552    1101 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/edbedd44-0a2a-4bad-a907-410adb53e0f5-tmp\") pod \"storage-provisioner\" (UID: \"edbedd44-0a2a-4bad-a907-410adb53e0f5\") " pod="kube-system/storage-provisioner"
	Jul 17 01:11:21 test-preload-625427 kubelet[1101]: I0717 01:11:21.959602    1101 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz545\" (UniqueName: \"kubernetes.io/projected/edbedd44-0a2a-4bad-a907-410adb53e0f5-kube-api-access-fz545\") pod \"storage-provisioner\" (UID: \"edbedd44-0a2a-4bad-a907-410adb53e0f5\") " pod="kube-system/storage-provisioner"
	Jul 17 01:11:21 test-preload-625427 kubelet[1101]: I0717 01:11:21.959658    1101 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a204f23-4455-47b4-9909-6ae9298090c1-config-volume\") pod \"coredns-6d4b75cb6d-mcfz8\" (UID: \"9a204f23-4455-47b4-9909-6ae9298090c1\") " pod="kube-system/coredns-6d4b75cb6d-mcfz8"
	Jul 17 01:11:21 test-preload-625427 kubelet[1101]: I0717 01:11:21.959722    1101 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6rhl\" (UniqueName: \"kubernetes.io/projected/9a204f23-4455-47b4-9909-6ae9298090c1-kube-api-access-d6rhl\") pod \"coredns-6d4b75cb6d-mcfz8\" (UID: \"9a204f23-4455-47b4-9909-6ae9298090c1\") " pod="kube-system/coredns-6d4b75cb6d-mcfz8"
	Jul 17 01:11:21 test-preload-625427 kubelet[1101]: I0717 01:11:21.959769    1101 reconciler.go:159] "Reconciler: start to sync state"
	Jul 17 01:11:21 test-preload-625427 kubelet[1101]: E0717 01:11:21.965785    1101 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 17 01:11:22 test-preload-625427 kubelet[1101]: I0717 01:11:22.285082    1101 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpq29\" (UniqueName: \"kubernetes.io/projected/869a3cae-cbf9-47a1-842f-b6e2f3656185-kube-api-access-lpq29\") pod \"869a3cae-cbf9-47a1-842f-b6e2f3656185\" (UID: \"869a3cae-cbf9-47a1-842f-b6e2f3656185\") "
	Jul 17 01:11:22 test-preload-625427 kubelet[1101]: I0717 01:11:22.285306    1101 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869a3cae-cbf9-47a1-842f-b6e2f3656185-config-volume\") pod \"869a3cae-cbf9-47a1-842f-b6e2f3656185\" (UID: \"869a3cae-cbf9-47a1-842f-b6e2f3656185\") "
	Jul 17 01:11:22 test-preload-625427 kubelet[1101]: W0717 01:11:22.286561    1101 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/869a3cae-cbf9-47a1-842f-b6e2f3656185/volumes/kubernetes.io~projected/kube-api-access-lpq29: clearQuota called, but quotas disabled
	Jul 17 01:11:22 test-preload-625427 kubelet[1101]: I0717 01:11:22.286810    1101 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869a3cae-cbf9-47a1-842f-b6e2f3656185-kube-api-access-lpq29" (OuterVolumeSpecName: "kube-api-access-lpq29") pod "869a3cae-cbf9-47a1-842f-b6e2f3656185" (UID: "869a3cae-cbf9-47a1-842f-b6e2f3656185"). InnerVolumeSpecName "kube-api-access-lpq29". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 01:11:22 test-preload-625427 kubelet[1101]: W0717 01:11:22.287152    1101 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/869a3cae-cbf9-47a1-842f-b6e2f3656185/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jul 17 01:11:22 test-preload-625427 kubelet[1101]: I0717 01:11:22.287794    1101 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869a3cae-cbf9-47a1-842f-b6e2f3656185-config-volume" (OuterVolumeSpecName: "config-volume") pod "869a3cae-cbf9-47a1-842f-b6e2f3656185" (UID: "869a3cae-cbf9-47a1-842f-b6e2f3656185"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jul 17 01:11:22 test-preload-625427 kubelet[1101]: E0717 01:11:22.287913    1101 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 17 01:11:22 test-preload-625427 kubelet[1101]: E0717 01:11:22.288029    1101 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9a204f23-4455-47b4-9909-6ae9298090c1-config-volume podName:9a204f23-4455-47b4-9909-6ae9298090c1 nodeName:}" failed. No retries permitted until 2024-07-17 01:11:22.787997497 +0000 UTC m=+5.999469621 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9a204f23-4455-47b4-9909-6ae9298090c1-config-volume") pod "coredns-6d4b75cb6d-mcfz8" (UID: "9a204f23-4455-47b4-9909-6ae9298090c1") : object "kube-system"/"coredns" not registered
	Jul 17 01:11:22 test-preload-625427 kubelet[1101]: I0717 01:11:22.385866    1101 reconciler.go:384] "Volume detached for volume \"kube-api-access-lpq29\" (UniqueName: \"kubernetes.io/projected/869a3cae-cbf9-47a1-842f-b6e2f3656185-kube-api-access-lpq29\") on node \"test-preload-625427\" DevicePath \"\""
	Jul 17 01:11:22 test-preload-625427 kubelet[1101]: I0717 01:11:22.386025    1101 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869a3cae-cbf9-47a1-842f-b6e2f3656185-config-volume\") on node \"test-preload-625427\" DevicePath \"\""
	Jul 17 01:11:22 test-preload-625427 kubelet[1101]: E0717 01:11:22.789083    1101 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 17 01:11:22 test-preload-625427 kubelet[1101]: E0717 01:11:22.789137    1101 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9a204f23-4455-47b4-9909-6ae9298090c1-config-volume podName:9a204f23-4455-47b4-9909-6ae9298090c1 nodeName:}" failed. No retries permitted until 2024-07-17 01:11:23.789122713 +0000 UTC m=+7.000594840 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9a204f23-4455-47b4-9909-6ae9298090c1-config-volume") pod "coredns-6d4b75cb6d-mcfz8" (UID: "9a204f23-4455-47b4-9909-6ae9298090c1") : object "kube-system"/"coredns" not registered
	Jul 17 01:11:23 test-preload-625427 kubelet[1101]: E0717 01:11:23.797611    1101 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 17 01:11:23 test-preload-625427 kubelet[1101]: E0717 01:11:23.797695    1101 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9a204f23-4455-47b4-9909-6ae9298090c1-config-volume podName:9a204f23-4455-47b4-9909-6ae9298090c1 nodeName:}" failed. No retries permitted until 2024-07-17 01:11:25.797680409 +0000 UTC m=+9.009152521 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9a204f23-4455-47b4-9909-6ae9298090c1-config-volume") pod "coredns-6d4b75cb6d-mcfz8" (UID: "9a204f23-4455-47b4-9909-6ae9298090c1") : object "kube-system"/"coredns" not registered
	Jul 17 01:11:24 test-preload-625427 kubelet[1101]: E0717 01:11:24.022808    1101 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-mcfz8" podUID=9a204f23-4455-47b4-9909-6ae9298090c1
	Jul 17 01:11:25 test-preload-625427 kubelet[1101]: I0717 01:11:25.033121    1101 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=869a3cae-cbf9-47a1-842f-b6e2f3656185 path="/var/lib/kubelet/pods/869a3cae-cbf9-47a1-842f-b6e2f3656185/volumes"
	Jul 17 01:11:25 test-preload-625427 kubelet[1101]: E0717 01:11:25.811144    1101 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 17 01:11:25 test-preload-625427 kubelet[1101]: E0717 01:11:25.811310    1101 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9a204f23-4455-47b4-9909-6ae9298090c1-config-volume podName:9a204f23-4455-47b4-9909-6ae9298090c1 nodeName:}" failed. No retries permitted until 2024-07-17 01:11:29.811290673 +0000 UTC m=+13.022762784 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9a204f23-4455-47b4-9909-6ae9298090c1-config-volume") pod "coredns-6d4b75cb6d-mcfz8" (UID: "9a204f23-4455-47b4-9909-6ae9298090c1") : object "kube-system"/"coredns" not registered
	Jul 17 01:11:26 test-preload-625427 kubelet[1101]: E0717 01:11:26.022921    1101 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-mcfz8" podUID=9a204f23-4455-47b4-9909-6ae9298090c1
	
	
	==> storage-provisioner [61dc817a7ee8e811f922b613aa5a94da7fde97e9cd76f9f983270ede28e3fe03] <==
	I0717 01:11:23.085631       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-625427 -n test-preload-625427
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-625427 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-625427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-625427
--- FAIL: TestPreload (163.11s)
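The kubelet log above retries the CoreDNS config-volume mount with exponential backoff (500ms, 1s, 2s, 4s) because the kube-system/coredns ConfigMap is not yet registered, and it also reports "Container runtime network not ready" while /etc/cni/net.d/ holds no CNI configuration. A minimal sketch of how both conditions could be checked from the host during a live reproduction of the failure (the test-preload-625427 profile is deleted during the cleanup step above, so these are illustrative only):

	# Illustrative checks only; the profile no longer exists once cleanup has run.
	kubectl --context test-preload-625427 -n kube-system get configmap coredns
	out/minikube-linux-amd64 ssh -p test-preload-625427 'ls /etc/cni/net.d/'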

                                                
                                    
TestKubernetesUpgrade (414.74s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-729236 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-729236 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m27.340667309s)
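The v1.20.0 start fails with exit status 109 after roughly 4m27s; the captured stdout and stderr follow. A hedged way to collect further diagnostics for a failure like this, assuming the kubernetes-upgrade-729236 profile still exists at that point in the run:

	# Illustrative only; these are standard minikube subcommands, but the profile
	# state at this point in the run is not shown in the report.
	out/minikube-linux-amd64 logs -p kubernetes-upgrade-729236 --file=kubernetes-upgrade-729236.log
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-729236 'sudo journalctl -u kubelet --no-pager | tail -n 50'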

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-729236] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-729236" primary control-plane node in "kubernetes-upgrade-729236" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:13:31.854340   55685 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:13:31.854700   55685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:13:31.854716   55685 out.go:304] Setting ErrFile to fd 2...
	I0717 01:13:31.854723   55685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:13:31.855074   55685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:13:31.856033   55685 out.go:298] Setting JSON to false
	I0717 01:13:31.857008   55685 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6961,"bootTime":1721171851,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:13:31.857068   55685 start.go:139] virtualization: kvm guest
	I0717 01:13:31.859439   55685 out.go:177] * [kubernetes-upgrade-729236] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:13:31.861693   55685 notify.go:220] Checking for updates...
	I0717 01:13:31.863118   55685 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:13:31.865549   55685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:13:31.868005   55685 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:13:31.870189   55685 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:13:31.872339   55685 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:13:31.875495   55685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:13:31.877232   55685 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:13:31.913702   55685 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 01:13:31.914779   55685 start.go:297] selected driver: kvm2
	I0717 01:13:31.914803   55685 start.go:901] validating driver "kvm2" against <nil>
	I0717 01:13:31.914816   55685 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:13:31.915749   55685 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:13:31.915841   55685 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:13:31.931130   55685 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:13:31.931174   55685 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 01:13:31.931419   55685 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 01:13:31.931446   55685 cni.go:84] Creating CNI manager for ""
	I0717 01:13:31.931456   55685 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:13:31.931474   55685 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 01:13:31.931587   55685 start.go:340] cluster config:
	{Name:kubernetes-upgrade-729236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-729236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:13:31.931724   55685 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:13:31.933355   55685 out.go:177] * Starting "kubernetes-upgrade-729236" primary control-plane node in "kubernetes-upgrade-729236" cluster
	I0717 01:13:31.934542   55685 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:13:31.934570   55685 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:13:31.934577   55685 cache.go:56] Caching tarball of preloaded images
	I0717 01:13:31.934647   55685 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:13:31.934661   55685 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 01:13:31.934943   55685 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/config.json ...
	I0717 01:13:31.934966   55685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/config.json: {Name:mke0571a521c7ec98dfb991d55d74c553d32b183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:13:31.935093   55685 start.go:360] acquireMachinesLock for kubernetes-upgrade-729236: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:13:31.935122   55685 start.go:364] duration metric: took 14.594µs to acquireMachinesLock for "kubernetes-upgrade-729236"
	I0717 01:13:31.935136   55685 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-729236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-729236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:13:31.935198   55685 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 01:13:31.936742   55685 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 01:13:31.936843   55685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:13:31.936881   55685 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:13:31.952204   55685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34475
	I0717 01:13:31.952753   55685 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:13:31.953290   55685 main.go:141] libmachine: Using API Version  1
	I0717 01:13:31.953309   55685 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:13:31.953710   55685 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:13:31.953893   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetMachineName
	I0717 01:13:31.954076   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .DriverName
	I0717 01:13:31.954268   55685 start.go:159] libmachine.API.Create for "kubernetes-upgrade-729236" (driver="kvm2")
	I0717 01:13:31.954295   55685 client.go:168] LocalClient.Create starting
	I0717 01:13:31.954335   55685 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 01:13:31.954371   55685 main.go:141] libmachine: Decoding PEM data...
	I0717 01:13:31.954392   55685 main.go:141] libmachine: Parsing certificate...
	I0717 01:13:31.954482   55685 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 01:13:31.954508   55685 main.go:141] libmachine: Decoding PEM data...
	I0717 01:13:31.954528   55685 main.go:141] libmachine: Parsing certificate...
	I0717 01:13:31.954550   55685 main.go:141] libmachine: Running pre-create checks...
	I0717 01:13:31.954568   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .PreCreateCheck
	I0717 01:13:31.954977   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetConfigRaw
	I0717 01:13:31.955326   55685 main.go:141] libmachine: Creating machine...
	I0717 01:13:31.955338   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .Create
	I0717 01:13:31.955464   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Creating KVM machine...
	I0717 01:13:31.956715   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found existing default KVM network
	I0717 01:13:31.957593   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:31.957432   55762 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002131a0}
	I0717 01:13:31.957620   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | created network xml: 
	I0717 01:13:31.957639   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | <network>
	I0717 01:13:31.957657   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG |   <name>mk-kubernetes-upgrade-729236</name>
	I0717 01:13:31.957672   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG |   <dns enable='no'/>
	I0717 01:13:31.957683   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG |   
	I0717 01:13:31.957705   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 01:13:31.957716   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG |     <dhcp>
	I0717 01:13:31.957731   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 01:13:31.957741   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG |     </dhcp>
	I0717 01:13:31.957767   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG |   </ip>
	I0717 01:13:31.957786   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG |   
	I0717 01:13:31.957798   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | </network>
	I0717 01:13:31.957808   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | 
	I0717 01:13:31.962921   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | trying to create private KVM network mk-kubernetes-upgrade-729236 192.168.39.0/24...
	I0717 01:13:32.039174   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | private KVM network mk-kubernetes-upgrade-729236 192.168.39.0/24 created
	I0717 01:13:32.039208   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236 ...
	I0717 01:13:32.039222   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:32.039151   55762 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:13:32.039241   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 01:13:32.039310   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 01:13:32.288165   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:32.288025   55762 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236/id_rsa...
	I0717 01:13:32.366426   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:32.366320   55762 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236/kubernetes-upgrade-729236.rawdisk...
	I0717 01:13:32.366461   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Writing magic tar header
	I0717 01:13:32.366480   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Writing SSH key tar header
	I0717 01:13:32.366528   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:32.366475   55762 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236 ...
	I0717 01:13:32.366597   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236
	I0717 01:13:32.366620   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 01:13:32.366633   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236 (perms=drwx------)
	I0717 01:13:32.366659   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:13:32.366677   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 01:13:32.366690   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 01:13:32.366703   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Checking permissions on dir: /home/jenkins
	I0717 01:13:32.366719   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 01:13:32.366735   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 01:13:32.366749   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 01:13:32.366763   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 01:13:32.366782   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 01:13:32.366794   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Checking permissions on dir: /home
	I0717 01:13:32.366806   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Skipping /home - not owner
	I0717 01:13:32.366819   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Creating domain...
	I0717 01:13:32.367693   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) define libvirt domain using xml: 
	I0717 01:13:32.367711   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) <domain type='kvm'>
	I0717 01:13:32.367800   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)   <name>kubernetes-upgrade-729236</name>
	I0717 01:13:32.367823   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)   <memory unit='MiB'>2200</memory>
	I0717 01:13:32.367831   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)   <vcpu>2</vcpu>
	I0717 01:13:32.367836   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)   <features>
	I0717 01:13:32.367861   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <acpi/>
	I0717 01:13:32.367880   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <apic/>
	I0717 01:13:32.367890   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <pae/>
	I0717 01:13:32.367900   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     
	I0717 01:13:32.367906   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)   </features>
	I0717 01:13:32.367911   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)   <cpu mode='host-passthrough'>
	I0717 01:13:32.367918   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)   
	I0717 01:13:32.367922   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)   </cpu>
	I0717 01:13:32.367929   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)   <os>
	I0717 01:13:32.367934   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <type>hvm</type>
	I0717 01:13:32.367940   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <boot dev='cdrom'/>
	I0717 01:13:32.367951   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <boot dev='hd'/>
	I0717 01:13:32.367968   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <bootmenu enable='no'/>
	I0717 01:13:32.367981   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)   </os>
	I0717 01:13:32.367993   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)   <devices>
	I0717 01:13:32.368002   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <disk type='file' device='cdrom'>
	I0717 01:13:32.368013   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236/boot2docker.iso'/>
	I0717 01:13:32.368021   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)       <target dev='hdc' bus='scsi'/>
	I0717 01:13:32.368036   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)       <readonly/>
	I0717 01:13:32.368047   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     </disk>
	I0717 01:13:32.368057   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <disk type='file' device='disk'>
	I0717 01:13:32.368071   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 01:13:32.368092   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236/kubernetes-upgrade-729236.rawdisk'/>
	I0717 01:13:32.368104   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)       <target dev='hda' bus='virtio'/>
	I0717 01:13:32.368114   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     </disk>
	I0717 01:13:32.368123   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <interface type='network'>
	I0717 01:13:32.368136   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)       <source network='mk-kubernetes-upgrade-729236'/>
	I0717 01:13:32.368145   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)       <model type='virtio'/>
	I0717 01:13:32.368152   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     </interface>
	I0717 01:13:32.368167   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <interface type='network'>
	I0717 01:13:32.368185   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)       <source network='default'/>
	I0717 01:13:32.368202   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)       <model type='virtio'/>
	I0717 01:13:32.368209   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     </interface>
	I0717 01:13:32.368216   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <serial type='pty'>
	I0717 01:13:32.368223   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)       <target port='0'/>
	I0717 01:13:32.368231   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     </serial>
	I0717 01:13:32.368242   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <console type='pty'>
	I0717 01:13:32.368252   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)       <target type='serial' port='0'/>
	I0717 01:13:32.368401   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     </console>
	I0717 01:13:32.368432   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     <rng model='virtio'>
	I0717 01:13:32.368450   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)       <backend model='random'>/dev/random</backend>
	I0717 01:13:32.368466   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     </rng>
	I0717 01:13:32.368479   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     
	I0717 01:13:32.368489   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)     
	I0717 01:13:32.368500   55685 main.go:141] libmachine: (kubernetes-upgrade-729236)   </devices>
	I0717 01:13:32.368509   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) </domain>
	I0717 01:13:32.368520   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) 
	I0717 01:13:32.372426   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:2d:64:47 in network default
	I0717 01:13:32.372994   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Ensuring networks are active...
	I0717 01:13:32.373016   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:32.373696   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Ensuring network default is active
	I0717 01:13:32.373975   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Ensuring network mk-kubernetes-upgrade-729236 is active
	I0717 01:13:32.374400   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Getting domain xml...
	I0717 01:13:32.374995   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Creating domain...
	I0717 01:13:33.676053   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Waiting to get IP...
	I0717 01:13:33.676950   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:33.677339   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:33.677366   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:33.677302   55762 retry.go:31] will retry after 233.40049ms: waiting for machine to come up
	I0717 01:13:33.912699   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:33.913114   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:33.913137   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:33.913069   55762 retry.go:31] will retry after 377.934798ms: waiting for machine to come up
	I0717 01:13:34.292478   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:34.293001   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:34.293032   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:34.292875   55762 retry.go:31] will retry after 298.28375ms: waiting for machine to come up
	I0717 01:13:34.592323   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:34.592700   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:34.592723   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:34.592673   55762 retry.go:31] will retry after 380.547606ms: waiting for machine to come up
	I0717 01:13:34.975203   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:34.975688   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:34.975716   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:34.975641   55762 retry.go:31] will retry after 532.154411ms: waiting for machine to come up
	I0717 01:13:35.509322   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:35.509757   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:35.509788   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:35.509711   55762 retry.go:31] will retry after 664.492878ms: waiting for machine to come up
	I0717 01:13:36.175436   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:36.175826   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:36.175859   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:36.175777   55762 retry.go:31] will retry after 719.619475ms: waiting for machine to come up
	I0717 01:13:36.896497   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:36.896897   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:36.896936   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:36.896866   55762 retry.go:31] will retry after 1.39231176s: waiting for machine to come up
	I0717 01:13:38.290319   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:38.290692   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:38.290713   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:38.290652   55762 retry.go:31] will retry after 1.684615931s: waiting for machine to come up
	I0717 01:13:39.976298   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:39.976699   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:39.976735   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:39.976670   55762 retry.go:31] will retry after 1.571934257s: waiting for machine to come up
	I0717 01:13:41.550235   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:41.550715   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:41.550749   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:41.550599   55762 retry.go:31] will retry after 1.871831796s: waiting for machine to come up
	I0717 01:13:43.424516   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:43.425044   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:43.425077   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:43.425007   55762 retry.go:31] will retry after 2.892220262s: waiting for machine to come up
	I0717 01:13:46.318646   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:46.319073   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:46.319102   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:46.319019   55762 retry.go:31] will retry after 3.912755221s: waiting for machine to come up
	I0717 01:13:50.235514   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:50.235867   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find current IP address of domain kubernetes-upgrade-729236 in network mk-kubernetes-upgrade-729236
	I0717 01:13:50.235893   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | I0717 01:13:50.235814   55762 retry.go:31] will retry after 3.495380033s: waiting for machine to come up
	I0717 01:13:53.733201   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:53.733612   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Found IP for machine: 192.168.39.195
	I0717 01:13:53.733644   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has current primary IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:53.733654   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Reserving static IP address...
	I0717 01:13:53.734015   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-729236", mac: "52:54:00:8b:7d:c9", ip: "192.168.39.195"} in network mk-kubernetes-upgrade-729236
	I0717 01:13:53.805561   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Getting to WaitForSSH function...
	I0717 01:13:53.805595   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Reserved static IP address: 192.168.39.195
	I0717 01:13:53.805609   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Waiting for SSH to be available...
	I0717 01:13:53.807948   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:53.808338   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:53.808373   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:53.808465   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Using SSH client type: external
	I0717 01:13:53.808502   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236/id_rsa (-rw-------)
	I0717 01:13:53.808526   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:13:53.808539   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | About to run SSH command:
	I0717 01:13:53.808566   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | exit 0
	I0717 01:13:53.936307   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | SSH cmd err, output: <nil>: 
	I0717 01:13:53.936606   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) KVM machine creation complete!
	I0717 01:13:53.936951   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetConfigRaw
	I0717 01:13:53.937477   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .DriverName
	I0717 01:13:53.937687   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .DriverName
	I0717 01:13:53.937846   55685 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 01:13:53.937875   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetState
	I0717 01:13:53.939060   55685 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 01:13:53.939074   55685 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 01:13:53.939082   55685 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 01:13:53.939090   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHHostname
	I0717 01:13:53.942430   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:53.942857   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:53.942887   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:53.943122   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHPort
	I0717 01:13:53.943306   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:53.943517   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:53.943700   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHUsername
	I0717 01:13:53.943858   55685 main.go:141] libmachine: Using SSH client type: native
	I0717 01:13:53.944098   55685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0717 01:13:53.944111   55685 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 01:13:54.051736   55685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:13:54.051756   55685 main.go:141] libmachine: Detecting the provisioner...
	I0717 01:13:54.051764   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHHostname
	I0717 01:13:54.054386   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.054791   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:54.054822   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.054994   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHPort
	I0717 01:13:54.055199   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:54.055337   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:54.055484   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHUsername
	I0717 01:13:54.055657   55685 main.go:141] libmachine: Using SSH client type: native
	I0717 01:13:54.055826   55685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0717 01:13:54.055837   55685 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 01:13:54.165287   55685 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 01:13:54.165352   55685 main.go:141] libmachine: found compatible host: buildroot
	I0717 01:13:54.165358   55685 main.go:141] libmachine: Provisioning with buildroot...
	I0717 01:13:54.165365   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetMachineName
	I0717 01:13:54.165598   55685 buildroot.go:166] provisioning hostname "kubernetes-upgrade-729236"
	I0717 01:13:54.165622   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetMachineName
	I0717 01:13:54.165783   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHHostname
	I0717 01:13:54.168247   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.168622   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:54.168656   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.168773   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHPort
	I0717 01:13:54.168957   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:54.169113   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:54.169257   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHUsername
	I0717 01:13:54.169447   55685 main.go:141] libmachine: Using SSH client type: native
	I0717 01:13:54.169660   55685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0717 01:13:54.169681   55685 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-729236 && echo "kubernetes-upgrade-729236" | sudo tee /etc/hostname
	I0717 01:13:54.292757   55685 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-729236
	
	I0717 01:13:54.292782   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHHostname
	I0717 01:13:54.295511   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.295893   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:54.295919   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.296042   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHPort
	I0717 01:13:54.296237   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:54.296392   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:54.296540   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHUsername
	I0717 01:13:54.296761   55685 main.go:141] libmachine: Using SSH client type: native
	I0717 01:13:54.296951   55685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0717 01:13:54.296984   55685 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-729236' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-729236/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-729236' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:13:54.413578   55685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:13:54.413602   55685 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 01:13:54.413630   55685 buildroot.go:174] setting up certificates
	I0717 01:13:54.413639   55685 provision.go:84] configureAuth start
	I0717 01:13:54.413647   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetMachineName
	I0717 01:13:54.413930   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetIP
	I0717 01:13:54.416539   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.416886   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:54.416914   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.417044   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHHostname
	I0717 01:13:54.419221   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.419498   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:54.419526   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.419635   55685 provision.go:143] copyHostCerts
	I0717 01:13:54.419698   55685 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 01:13:54.419707   55685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 01:13:54.419775   55685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 01:13:54.419887   55685 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 01:13:54.419896   55685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 01:13:54.419922   55685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 01:13:54.419995   55685 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 01:13:54.420002   55685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 01:13:54.420023   55685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 01:13:54.420066   55685 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-729236 san=[127.0.0.1 192.168.39.195 kubernetes-upgrade-729236 localhost minikube]
	I0717 01:13:54.799049   55685 provision.go:177] copyRemoteCerts
	I0717 01:13:54.799107   55685 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:13:54.799131   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHHostname
	I0717 01:13:54.801684   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.802078   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:54.802111   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.802313   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHPort
	I0717 01:13:54.802527   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:54.802777   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHUsername
	I0717 01:13:54.802935   55685 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236/id_rsa Username:docker}
	I0717 01:13:54.886533   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 01:13:54.912979   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:13:54.938671   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 01:13:54.963157   55685 provision.go:87] duration metric: took 549.508202ms to configureAuth
	I0717 01:13:54.963182   55685 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:13:54.963375   55685 config.go:182] Loaded profile config "kubernetes-upgrade-729236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:13:54.963453   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHHostname
	I0717 01:13:54.965809   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.966135   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:54.966165   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:54.966294   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHPort
	I0717 01:13:54.966481   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:54.966642   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:54.966752   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHUsername
	I0717 01:13:54.966863   55685 main.go:141] libmachine: Using SSH client type: native
	I0717 01:13:54.967015   55685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0717 01:13:54.967030   55685 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:13:55.224925   55685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
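
	At 01:13:54.967 the provisioner writes CRI-O's minikube options file so the service CIDR (10.96.0.0/12) is treated as an insecure registry range, then restarts the runtime. The same operation as a standalone sketch; every path and value below is copied from the command above, only the grouping is added:

	    sudo mkdir -p /etc/sysconfig
	    printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio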
	
	I0717 01:13:55.224952   55685 main.go:141] libmachine: Checking connection to Docker...
	I0717 01:13:55.224962   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetURL
	I0717 01:13:55.226222   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | Using libvirt version 6000000
	I0717 01:13:55.228329   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.228699   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:55.228727   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.228838   55685 main.go:141] libmachine: Docker is up and running!
	I0717 01:13:55.228851   55685 main.go:141] libmachine: Reticulating splines...
	I0717 01:13:55.228857   55685 client.go:171] duration metric: took 23.274552515s to LocalClient.Create
	I0717 01:13:55.228874   55685 start.go:167] duration metric: took 23.274608309s to libmachine.API.Create "kubernetes-upgrade-729236"
	I0717 01:13:55.228883   55685 start.go:293] postStartSetup for "kubernetes-upgrade-729236" (driver="kvm2")
	I0717 01:13:55.228892   55685 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:13:55.228909   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .DriverName
	I0717 01:13:55.229121   55685 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:13:55.229150   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHHostname
	I0717 01:13:55.231149   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.231389   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:55.231409   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.231588   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHPort
	I0717 01:13:55.231739   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:55.231913   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHUsername
	I0717 01:13:55.232058   55685 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236/id_rsa Username:docker}
	I0717 01:13:55.314564   55685 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:13:55.318895   55685 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:13:55.318917   55685 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:13:55.318993   55685 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:13:55.319102   55685 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:13:55.319219   55685 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:13:55.328545   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:13:55.352922   55685 start.go:296] duration metric: took 124.028018ms for postStartSetup
	I0717 01:13:55.352992   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetConfigRaw
	I0717 01:13:55.353726   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetIP
	I0717 01:13:55.356436   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.356817   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:55.356863   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.357022   55685 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/config.json ...
	I0717 01:13:55.357242   55685 start.go:128] duration metric: took 23.422034854s to createHost
	I0717 01:13:55.357270   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHHostname
	I0717 01:13:55.359554   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.360004   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:55.360039   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.360118   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHPort
	I0717 01:13:55.360314   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:55.360468   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:55.360613   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHUsername
	I0717 01:13:55.360783   55685 main.go:141] libmachine: Using SSH client type: native
	I0717 01:13:55.360982   55685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0717 01:13:55.361001   55685 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 01:13:55.468993   55685 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721178835.431940479
	
	I0717 01:13:55.469015   55685 fix.go:216] guest clock: 1721178835.431940479
	I0717 01:13:55.469025   55685 fix.go:229] Guest: 2024-07-17 01:13:55.431940479 +0000 UTC Remote: 2024-07-17 01:13:55.357256891 +0000 UTC m=+23.547938899 (delta=74.683588ms)
	I0717 01:13:55.469060   55685 fix.go:200] guest clock delta is within tolerance: 74.683588ms
	I0717 01:13:55.469067   55685 start.go:83] releasing machines lock for "kubernetes-upgrade-729236", held for 23.533938824s
	I0717 01:13:55.469089   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .DriverName
	I0717 01:13:55.469374   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetIP
	I0717 01:13:55.472305   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.472679   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:55.472706   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.472881   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .DriverName
	I0717 01:13:55.473437   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .DriverName
	I0717 01:13:55.473667   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .DriverName
	I0717 01:13:55.473754   55685 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:13:55.473799   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHHostname
	I0717 01:13:55.473917   55685 ssh_runner.go:195] Run: cat /version.json
	I0717 01:13:55.473945   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHHostname
	I0717 01:13:55.476574   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.476847   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.476914   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:55.476938   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.477220   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHPort
	I0717 01:13:55.477230   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:55.477257   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:55.477342   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHPort
	I0717 01:13:55.477426   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:55.477495   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHKeyPath
	I0717 01:13:55.477562   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHUsername
	I0717 01:13:55.477614   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetSSHUsername
	I0717 01:13:55.477723   55685 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236/id_rsa Username:docker}
	I0717 01:13:55.477795   55685 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/kubernetes-upgrade-729236/id_rsa Username:docker}
	I0717 01:13:55.577965   55685 ssh_runner.go:195] Run: systemctl --version
	I0717 01:13:55.585204   55685 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:13:55.755497   55685 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:13:55.761786   55685 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:13:55.761858   55685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:13:55.777819   55685 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:13:55.777852   55685 start.go:495] detecting cgroup driver to use...
	I0717 01:13:55.777918   55685 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:13:55.794994   55685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:13:55.808484   55685 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:13:55.808546   55685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:13:55.824605   55685 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:13:55.840308   55685 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:13:55.961169   55685 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:13:56.109987   55685 docker.go:233] disabling docker service ...
	I0717 01:13:56.110055   55685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:13:56.124229   55685 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:13:56.136866   55685 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:13:56.285781   55685 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:13:56.410950   55685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
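
	Before CRI-O itself is configured, the Docker-based runtimes are stopped and masked so nothing else answers on the CRI socket. The stop/disable/mask sequence from 01:13:55.808 through 01:13:56.410, gathered into one sketch using only the commands that appear in the log:

	    # take cri-dockerd and dockerd out of the picture for this crio run
	    sudo systemctl stop -f cri-docker.socket
	    sudo systemctl stop -f cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket
	    sudo systemctl stop -f docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service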
	I0717 01:13:56.425112   55685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:13:56.443645   55685 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 01:13:56.443716   55685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:13:56.454461   55685 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:13:56.454539   55685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:13:56.466590   55685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:13:56.477456   55685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:13:56.487874   55685 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:13:56.498847   55685 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:13:56.509477   55685 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:13:56.509537   55685 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:13:56.526215   55685 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:13:56.541394   55685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:13:56.665488   55685 ssh_runner.go:195] Run: sudo systemctl restart crio
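
	The steps from 01:13:56.42 to 01:13:56.66 prepare CRI-O for the v1.20.0 bring-up: point crictl at the CRI-O socket, set the pause image and cgroupfs cgroup manager, clear stale CNI state, load br_netfilter, enable IP forwarding, and restart the runtime. A consolidated sketch of the same commands; only the one-line form of the crictl.yaml write differs from the literal multi-line printf in the log:

	    # crictl talks to CRI-O's socket
	    printf '%s\n' "runtime-endpoint: unix:///var/run/crio/crio.sock" | sudo tee /etc/crictl.yaml
	    # pause image and cgroupfs cgroup manager
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo rm -rf /etc/cni/net.mk
	    # netfilter prerequisites for the bridge CNI
	    sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload && sudo systemctl restart crio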
	I0717 01:13:56.812162   55685 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:13:56.812243   55685 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:13:56.818225   55685 start.go:563] Will wait 60s for crictl version
	I0717 01:13:56.818287   55685 ssh_runner.go:195] Run: which crictl
	I0717 01:13:56.822595   55685 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:13:56.868076   55685 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:13:56.868170   55685 ssh_runner.go:195] Run: crio --version
	I0717 01:13:56.899304   55685 ssh_runner.go:195] Run: crio --version
	I0717 01:13:56.932130   55685 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 01:13:56.933411   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) Calling .GetIP
	I0717 01:13:56.936383   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:56.936830   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:7d:c9", ip: ""} in network mk-kubernetes-upgrade-729236: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:46 +0000 UTC Type:0 Mac:52:54:00:8b:7d:c9 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:kubernetes-upgrade-729236 Clientid:01:52:54:00:8b:7d:c9}
	I0717 01:13:56.936858   55685 main.go:141] libmachine: (kubernetes-upgrade-729236) DBG | domain kubernetes-upgrade-729236 has defined IP address 192.168.39.195 and MAC address 52:54:00:8b:7d:c9 in network mk-kubernetes-upgrade-729236
	I0717 01:13:56.937109   55685 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:13:56.941765   55685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:13:56.955737   55685 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-729236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-729236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:13:56.955833   55685 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:13:56.955896   55685 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:13:56.996088   55685 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:13:56.996165   55685 ssh_runner.go:195] Run: which lz4
	I0717 01:13:57.000121   55685 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:13:57.004294   55685 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:13:57.004329   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 01:13:58.690503   55685 crio.go:462] duration metric: took 1.690404379s to copy over tarball
	I0717 01:13:58.690583   55685 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:14:01.221271   55685 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.530653998s)
	I0717 01:14:01.221298   55685 crio.go:469] duration metric: took 2.530761701s to extract the tarball
	I0717 01:14:01.221305   55685 ssh_runner.go:146] rm: /preloaded.tar.lz4
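
	The stat at 01:13:57.004 shows no preload on the guest, so the cached CRI-O preload for v1.20.0 (~473 MB) is copied over and unpacked into /var before the archive is deleted. Condensed to the guest-side commands that appear in the log; the scp itself is performed by minikube's ssh_runner, and the exact flags of the final cleanup are not shown:

	    stat -c "%s %y" /preloaded.tar.lz4    # exits 1: nothing cached on the guest yet
	    # preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 is then scp'd to /preloaded.tar.lz4
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4         # cleanup, as recorded by ssh_runner.go:146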
	I0717 01:14:01.264005   55685 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:14:01.309966   55685 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:14:01.309994   55685 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:14:01.310058   55685 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:14:01.310086   55685 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:14:01.310095   55685 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:14:01.310077   55685 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:14:01.310124   55685 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:14:01.310150   55685 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 01:14:01.310163   55685 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:14:01.310133   55685 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 01:14:01.311431   55685 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:14:01.311464   55685 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 01:14:01.311472   55685 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:14:01.311483   55685 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:14:01.311481   55685 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 01:14:01.311437   55685 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:14:01.311553   55685 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:14:01.311596   55685 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:14:01.471993   55685 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 01:14:01.480700   55685 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:14:01.496138   55685 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:14:01.496428   55685 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 01:14:01.497050   55685 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:14:01.503422   55685 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:14:01.533548   55685 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 01:14:01.533597   55685 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:14:01.533649   55685 ssh_runner.go:195] Run: which crictl
	I0717 01:14:01.573476   55685 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 01:14:01.584918   55685 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 01:14:01.584962   55685 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:14:01.585006   55685 ssh_runner.go:195] Run: which crictl
	I0717 01:14:01.605521   55685 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:14:01.648528   55685 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 01:14:01.648586   55685 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:14:01.648635   55685 ssh_runner.go:195] Run: which crictl
	I0717 01:14:01.654682   55685 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 01:14:01.654731   55685 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:14:01.654736   55685 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 01:14:01.654764   55685 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 01:14:01.654782   55685 ssh_runner.go:195] Run: which crictl
	I0717 01:14:01.654805   55685 ssh_runner.go:195] Run: which crictl
	I0717 01:14:01.664125   55685 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 01:14:01.664165   55685 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:14:01.664206   55685 ssh_runner.go:195] Run: which crictl
	I0717 01:14:01.664253   55685 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 01:14:01.698642   55685 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 01:14:01.698675   55685 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 01:14:01.698719   55685 ssh_runner.go:195] Run: which crictl
	I0717 01:14:01.698731   55685 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:14:01.833752   55685 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:14:01.833823   55685 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:14:01.833843   55685 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 01:14:01.833917   55685 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:14:01.833951   55685 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 01:14:01.833921   55685 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 01:14:01.833991   55685 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 01:14:01.936786   55685 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 01:14:01.936807   55685 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 01:14:01.936821   55685 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 01:14:01.936923   55685 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 01:14:01.936959   55685 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 01:14:01.937001   55685 cache_images.go:92] duration metric: took 626.99164ms to LoadCachedImages
	W0717 01:14:01.937075   55685 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0717 01:14:01.937090   55685 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.20.0 crio true true} ...
	I0717 01:14:01.937224   55685 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-729236 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-729236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:14:01.937299   55685 ssh_runner.go:195] Run: crio config
	I0717 01:14:01.989721   55685 cni.go:84] Creating CNI manager for ""
	I0717 01:14:01.989743   55685 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:14:01.989751   55685 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:14:01.989769   55685 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-729236 NodeName:kubernetes-upgrade-729236 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 01:14:01.989952   55685 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-729236"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:14:01.990031   55685 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 01:14:02.002368   55685 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:14:02.002453   55685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:14:02.013043   55685 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0717 01:14:02.032114   55685 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:14:02.050909   55685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0717 01:14:02.071461   55685 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I0717 01:14:02.076683   55685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:14:02.090252   55685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:14:02.224953   55685 ssh_runner.go:195] Run: sudo systemctl start kubelet
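
	The kubelet drop-in generated at 01:14:01.937 is what the three scp calls above place on disk: 10-kubeadm.conf under /etc/systemd/system/kubelet.service.d, kubelet.service under /lib/systemd/system, and kubeadm.yaml.new under /var/tmp/minikube. Activation is then just the two systemctl calls shown; sketched from the logged commands only:

	    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	    # 10-kubeadm.conf, kubelet.service and kubeadm.yaml.new are written here by ssh_runner scp
	    sudo systemctl daemon-reload
	    sudo systemctl start kubelet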
	I0717 01:14:02.243428   55685 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236 for IP: 192.168.39.195
	I0717 01:14:02.243453   55685 certs.go:194] generating shared ca certs ...
	I0717 01:14:02.243473   55685 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:02.243647   55685 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:14:02.243697   55685 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:14:02.243710   55685 certs.go:256] generating profile certs ...
	I0717 01:14:02.243775   55685 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/client.key
	I0717 01:14:02.243791   55685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/client.crt with IP's: []
	I0717 01:14:02.337819   55685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/client.crt ...
	I0717 01:14:02.337855   55685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/client.crt: {Name:mk4c20b4d824fcbaec4780083f17c559b7f1c4b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:02.338092   55685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/client.key ...
	I0717 01:14:02.338116   55685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/client.key: {Name:mkb3359b502b60dca61e81354a1e4975de0fce16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:02.338239   55685 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.key.facb88c7
	I0717 01:14:02.338279   55685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.crt.facb88c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195]
	I0717 01:14:02.431565   55685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.crt.facb88c7 ...
	I0717 01:14:02.431592   55685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.crt.facb88c7: {Name:mke871ad264d2c3db049b53d5c5dd273276fdf79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:02.431751   55685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.key.facb88c7 ...
	I0717 01:14:02.431765   55685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.key.facb88c7: {Name:mk2679e5865a7084f364f4cd3f58abe909d28274 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:02.431861   55685 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.crt.facb88c7 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.crt
	I0717 01:14:02.431974   55685 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.key.facb88c7 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.key
	I0717 01:14:02.432051   55685 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/proxy-client.key
	I0717 01:14:02.432073   55685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/proxy-client.crt with IP's: []
	I0717 01:14:02.596752   55685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/proxy-client.crt ...
	I0717 01:14:02.596781   55685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/proxy-client.crt: {Name:mkc0a4b0e4c4d7181cf00ab295a991da06e19db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:02.596968   55685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/proxy-client.key ...
	I0717 01:14:02.596994   55685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/proxy-client.key: {Name:mk15e173587bcd6617d744b1682b8991e849855c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:02.597210   55685 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:14:02.597250   55685 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:14:02.597265   55685 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:14:02.597298   55685 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:14:02.597325   55685 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:14:02.597354   55685 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:14:02.597404   55685 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:14:02.598711   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:14:02.627139   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:14:02.652478   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:14:02.677518   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:14:02.704221   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 01:14:02.729992   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:14:02.753973   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:14:02.779325   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:14:02.806244   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:14:02.833581   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:14:02.859590   55685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:14:02.886748   55685 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:14:02.906125   55685 ssh_runner.go:195] Run: openssl version
	I0717 01:14:02.912256   55685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:14:02.925250   55685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:14:02.930111   55685 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:14:02.930172   55685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:14:02.936988   55685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:14:02.949794   55685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:14:02.962921   55685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:14:02.969124   55685 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:14:02.969177   55685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:14:02.977131   55685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:14:02.988685   55685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:14:03.000183   55685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:14:03.004774   55685 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:14:03.004820   55685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:14:03.010738   55685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:14:03.022064   55685 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:14:03.026604   55685 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 01:14:03.026660   55685 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-729236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-729236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:14:03.026750   55685 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:14:03.026802   55685 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:14:03.088072   55685 cri.go:89] found id: ""
	I0717 01:14:03.088155   55685 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:14:03.103274   55685 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:14:03.122430   55685 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:14:03.138590   55685 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:14:03.138614   55685 kubeadm.go:157] found existing configuration files:
	
	I0717 01:14:03.138664   55685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:14:03.153799   55685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:14:03.153862   55685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:14:03.168838   55685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:14:03.182380   55685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:14:03.182450   55685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:14:03.192965   55685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:14:03.202728   55685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:14:03.202786   55685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:14:03.213042   55685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:14:03.222984   55685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:14:03.223054   55685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:14:03.233519   55685 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:14:03.357967   55685 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 01:14:03.358122   55685 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:14:03.499914   55685 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:14:03.500095   55685 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:14:03.500262   55685 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:14:03.702450   55685 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:14:03.749613   55685 out.go:204]   - Generating certificates and keys ...
	I0717 01:14:03.749761   55685 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:14:03.749860   55685 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:14:03.982522   55685 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 01:14:04.158165   55685 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 01:14:04.528496   55685 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 01:14:04.625017   55685 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 01:14:04.889242   55685 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 01:14:04.889426   55685 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-729236 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0717 01:14:05.028535   55685 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 01:14:05.028717   55685 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-729236 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0717 01:14:05.159389   55685 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 01:14:05.277804   55685 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 01:14:05.384940   55685 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 01:14:05.385143   55685 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:14:05.485066   55685 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:14:05.561041   55685 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:14:05.682575   55685 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:14:05.784693   55685 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:14:05.807909   55685 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:14:05.808056   55685 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:14:05.808134   55685 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:14:05.956584   55685 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:14:05.991401   55685 out.go:204]   - Booting up control plane ...
	I0717 01:14:05.991570   55685 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:14:05.991670   55685 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:14:05.991771   55685 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:14:05.991901   55685 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:14:05.992137   55685 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 01:14:45.956254   55685 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 01:14:45.956961   55685 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:14:45.957177   55685 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:14:50.957109   55685 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:14:50.957413   55685 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:15:00.957086   55685 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:15:00.957351   55685 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:15:20.957890   55685 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:15:20.958171   55685 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:16:00.960105   55685 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:16:00.960310   55685 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:16:00.960323   55685 kubeadm.go:310] 
	I0717 01:16:00.960368   55685 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 01:16:00.960534   55685 kubeadm.go:310] 		timed out waiting for the condition
	I0717 01:16:00.960570   55685 kubeadm.go:310] 
	I0717 01:16:00.960618   55685 kubeadm.go:310] 	This error is likely caused by:
	I0717 01:16:00.960674   55685 kubeadm.go:310] 		- The kubelet is not running
	I0717 01:16:00.960833   55685 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 01:16:00.960845   55685 kubeadm.go:310] 
	I0717 01:16:00.960994   55685 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 01:16:00.961032   55685 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 01:16:00.961083   55685 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 01:16:00.961094   55685 kubeadm.go:310] 
	I0717 01:16:00.961257   55685 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 01:16:00.961392   55685 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 01:16:00.961412   55685 kubeadm.go:310] 
	I0717 01:16:00.961551   55685 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 01:16:00.961700   55685 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 01:16:00.961818   55685 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 01:16:00.961926   55685 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 01:16:00.961936   55685 kubeadm.go:310] 
	I0717 01:16:00.962996   55685 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:16:00.963117   55685 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 01:16:00.963215   55685 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0717 01:16:00.963337   55685 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-729236 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-729236 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-729236 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-729236 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 01:16:00.963393   55685 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 01:16:02.018354   55685 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.054933977s)
	I0717 01:16:02.018426   55685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:16:02.038157   55685 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:16:02.050950   55685 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:16:02.050980   55685 kubeadm.go:157] found existing configuration files:
	
	I0717 01:16:02.051031   55685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:16:02.062991   55685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:16:02.063063   55685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:16:02.075454   55685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:16:02.086658   55685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:16:02.086738   55685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:16:02.101883   55685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:16:02.116071   55685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:16:02.116146   55685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:16:02.130874   55685 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:16:02.142363   55685 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:16:02.142434   55685 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:16:02.156197   55685 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:16:02.248444   55685 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 01:16:02.248628   55685 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:16:02.426759   55685 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:16:02.426929   55685 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:16:02.427040   55685 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:16:02.660039   55685 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:16:02.662063   55685 out.go:204]   - Generating certificates and keys ...
	I0717 01:16:02.662189   55685 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:16:02.662277   55685 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:16:02.662372   55685 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 01:16:02.662462   55685 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 01:16:02.662583   55685 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 01:16:02.662673   55685 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 01:16:02.662759   55685 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 01:16:02.662842   55685 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 01:16:02.662963   55685 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 01:16:02.663097   55685 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 01:16:02.663154   55685 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 01:16:02.663240   55685 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:16:02.807824   55685 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:16:02.927480   55685 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:16:03.080049   55685 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:16:03.346702   55685 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:16:03.364145   55685 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:16:03.365738   55685 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:16:03.365782   55685 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:16:03.543692   55685 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:16:03.545491   55685 out.go:204]   - Booting up control plane ...
	I0717 01:16:03.545607   55685 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:16:03.546373   55685 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:16:03.549403   55685 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:16:03.551775   55685 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:16:03.558106   55685 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 01:16:43.555703   55685 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 01:16:43.556354   55685 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:16:43.556648   55685 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:16:48.556665   55685 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:16:48.556837   55685 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:16:58.557037   55685 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:16:58.557300   55685 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:17:18.557965   55685 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:17:18.558233   55685 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:17:58.560646   55685 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:17:58.560877   55685 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:17:58.560889   55685 kubeadm.go:310] 
	I0717 01:17:58.560951   55685 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 01:17:58.561025   55685 kubeadm.go:310] 		timed out waiting for the condition
	I0717 01:17:58.561041   55685 kubeadm.go:310] 
	I0717 01:17:58.561069   55685 kubeadm.go:310] 	This error is likely caused by:
	I0717 01:17:58.561113   55685 kubeadm.go:310] 		- The kubelet is not running
	I0717 01:17:58.561245   55685 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 01:17:58.561260   55685 kubeadm.go:310] 
	I0717 01:17:58.561398   55685 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 01:17:58.561479   55685 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 01:17:58.561536   55685 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 01:17:58.561545   55685 kubeadm.go:310] 
	I0717 01:17:58.561675   55685 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 01:17:58.561785   55685 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 01:17:58.561796   55685 kubeadm.go:310] 
	I0717 01:17:58.561948   55685 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 01:17:58.562071   55685 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 01:17:58.562176   55685 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 01:17:58.562268   55685 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 01:17:58.562279   55685 kubeadm.go:310] 
	I0717 01:17:58.562749   55685 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:17:58.562823   55685 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 01:17:58.562877   55685 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 01:17:58.562938   55685 kubeadm.go:394] duration metric: took 3m55.536283822s to StartCluster
	I0717 01:17:58.562979   55685 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:17:58.563046   55685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:17:58.605231   55685 cri.go:89] found id: ""
	I0717 01:17:58.605262   55685 logs.go:276] 0 containers: []
	W0717 01:17:58.605274   55685 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:17:58.605282   55685 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:17:58.605348   55685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:17:58.640298   55685 cri.go:89] found id: ""
	I0717 01:17:58.640333   55685 logs.go:276] 0 containers: []
	W0717 01:17:58.640342   55685 logs.go:278] No container was found matching "etcd"
	I0717 01:17:58.640348   55685 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:17:58.640400   55685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:17:58.673247   55685 cri.go:89] found id: ""
	I0717 01:17:58.673272   55685 logs.go:276] 0 containers: []
	W0717 01:17:58.673282   55685 logs.go:278] No container was found matching "coredns"
	I0717 01:17:58.673288   55685 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:17:58.673339   55685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:17:58.723063   55685 cri.go:89] found id: ""
	I0717 01:17:58.723097   55685 logs.go:276] 0 containers: []
	W0717 01:17:58.723120   55685 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:17:58.723128   55685 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:17:58.723188   55685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:17:58.757596   55685 cri.go:89] found id: ""
	I0717 01:17:58.757620   55685 logs.go:276] 0 containers: []
	W0717 01:17:58.757627   55685 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:17:58.757633   55685 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:17:58.757680   55685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:17:58.791107   55685 cri.go:89] found id: ""
	I0717 01:17:58.791135   55685 logs.go:276] 0 containers: []
	W0717 01:17:58.791144   55685 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:17:58.791153   55685 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:17:58.791217   55685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:17:58.824907   55685 cri.go:89] found id: ""
	I0717 01:17:58.824936   55685 logs.go:276] 0 containers: []
	W0717 01:17:58.824946   55685 logs.go:278] No container was found matching "kindnet"
	I0717 01:17:58.824957   55685 logs.go:123] Gathering logs for kubelet ...
	I0717 01:17:58.824970   55685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:17:58.877689   55685 logs.go:123] Gathering logs for dmesg ...
	I0717 01:17:58.877735   55685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:17:58.891849   55685 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:17:58.891880   55685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:17:58.995061   55685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:17:58.995090   55685 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:17:58.995102   55685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:17:59.095761   55685 logs.go:123] Gathering logs for container status ...
	I0717 01:17:59.095800   55685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 01:17:59.134420   55685 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 01:17:59.134463   55685 out.go:239] * 
	* 
	W0717 01:17:59.134523   55685 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 01:17:59.134623   55685 out.go:239] * 
	W0717 01:17:59.135430   55685 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 01:17:59.138485   55685 out.go:177] 
	W0717 01:17:59.139741   55685 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 01:17:59.139795   55685 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 01:17:59.139813   55685 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 01:17:59.141096   55685 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-729236 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
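The stderr above ends with a concrete suggestion: a kubelet/cri-o cgroup-driver mismatch. A minimal by-hand follow-up, sketched only from commands and paths that already appear in this report (the cri-o drop-in path is the one read for another profile in the Audit table below; the start flags are the ones this test passes, plus the suggested --extra-config):

	# Inspect which cgroup manager cri-o was configured with on the node
	out/minikube-linux-amd64 -p kubernetes-upgrade-729236 ssh "sudo cat /etc/crio/crio.conf.d/02-crio.conf"
	# See why the kubelet never answered on 127.0.0.1:10248
	out/minikube-linux-amd64 -p kubernetes-upgrade-729236 ssh "sudo journalctl -xeu kubelet"
	# Retry the same start with the kubelet pinned to the systemd cgroup driver
	out/minikube-linux-amd64 start -p kubernetes-upgrade-729236 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd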
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-729236
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-729236: (6.32816172s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-729236 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-729236 status --format={{.Host}}: exit status 7 (68.489045ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
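The Go-template status check above reports only the host state, which is why exit status 7 is tolerated here. When reproducing by hand, minikube's JSON status mode (a standard minikube status option, not something this test exercises) also reports the kubelet, apiserver, and kubeconfig state:

	out/minikube-linux-amd64 -p kubernetes-upgrade-729236 status --output=json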
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-729236 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-729236 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.611548378s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-729236 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-729236 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-729236 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (86.914936ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-729236] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-729236
	    minikube start -p kubernetes-upgrade-729236 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7292362 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-729236 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
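The three recovery options minikube prints above are complete as given. For a by-hand reproduction, the quickest confirmation of what the profile actually runs is the same check the test already issued at version_upgrade_test.go:248, followed by option 1 verbatim if a clean v1.20.0 cluster is wanted:

	kubectl --context kubernetes-upgrade-729236 version --output=json
	minikube delete -p kubernetes-upgrade-729236
	minikube start -p kubernetes-upgrade-729236 --kubernetes-version=v1.20.0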
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-729236 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0717 01:19:18.738620   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-729236 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.77658058s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-17 01:20:21.125853802 +0000 UTC m=+4551.929999624
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-729236 -n kubernetes-upgrade-729236
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-729236 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-729236 logs -n 25: (3.382168467s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-581130                                       | pause-581130                 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | --alsologtostderr -v=5                                |                              |         |         |                     |                     |
	| delete  | -p pause-581130                                       | pause-581130                 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	| start   | -p force-systemd-flag-804874                          | force-systemd-flag-804874    | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:17 UTC |
	|         | --memory=2048 --force-systemd                         |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-938456 sudo                           | NoKubernetes-938456          | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC |                     |
	|         | systemctl is-active --quiet                           |                              |         |         |                     |                     |
	|         | service kubelet                                       |                              |         |         |                     |                     |
	| stop    | -p NoKubernetes-938456                                | NoKubernetes-938456          | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	| start   | -p NoKubernetes-938456                                | NoKubernetes-938456          | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:17 UTC |
	|         | --driver=kvm2                                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| ssh     | force-systemd-flag-804874 ssh cat                     | force-systemd-flag-804874    | jenkins | v1.33.1 | 17 Jul 24 01:17 UTC | 17 Jul 24 01:17 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                    |                              |         |         |                     |                     |
	| delete  | -p force-systemd-flag-804874                          | force-systemd-flag-804874    | jenkins | v1.33.1 | 17 Jul 24 01:17 UTC | 17 Jul 24 01:17 UTC |
	| ssh     | -p NoKubernetes-938456 sudo                           | NoKubernetes-938456          | jenkins | v1.33.1 | 17 Jul 24 01:17 UTC |                     |
	|         | systemctl is-active --quiet                           |                              |         |         |                     |                     |
	|         | service kubelet                                       |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-938456                                | NoKubernetes-938456          | jenkins | v1.33.1 | 17 Jul 24 01:17 UTC | 17 Jul 24 01:17 UTC |
	| start   | -p force-systemd-env-820894                           | force-systemd-env-820894     | jenkins | v1.33.1 | 17 Jul 24 01:17 UTC | 17 Jul 24 01:18 UTC |
	|         | --memory=2048                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| delete  | -p                                                    | disable-driver-mounts-323595 | jenkins | v1.33.1 | 17 Jul 24 01:17 UTC | 17 Jul 24 01:17 UTC |
	|         | disable-driver-mounts-323595                          |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-621535                             | minikube                     | jenkins | v1.26.0 | 17 Jul 24 01:17 UTC | 17 Jul 24 01:18 UTC |
	|         | --memory=2200 --vm-driver=kvm2                        |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-729236                          | kubernetes-upgrade-729236    | jenkins | v1.33.1 | 17 Jul 24 01:17 UTC | 17 Jul 24 01:18 UTC |
	| start   | -p kubernetes-upgrade-729236                          | kubernetes-upgrade-729236    | jenkins | v1.33.1 | 17 Jul 24 01:18 UTC | 17 Jul 24 01:19 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                   |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-820894                           | force-systemd-env-820894     | jenkins | v1.33.1 | 17 Jul 24 01:18 UTC | 17 Jul 24 01:18 UTC |
	| start   | -p running-upgrade-261470                             | minikube                     | jenkins | v1.26.0 | 17 Jul 24 01:18 UTC | 17 Jul 24 01:19 UTC |
	|         | --memory=2200 --vm-driver=kvm2                        |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                              |         |         |                     |                     |
	| stop    | stopped-upgrade-621535 stop                           | minikube                     | jenkins | v1.26.0 | 17 Jul 24 01:18 UTC | 17 Jul 24 01:19 UTC |
	| addons  | enable metrics-server -p old-k8s-version-249342       | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-621535                             | stopped-upgrade-621535       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:19 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-729236                          | kubernetes-upgrade-729236    | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                              |         |         |                     |                     |
	|         | --driver=kvm2                                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-729236                          | kubernetes-upgrade-729236    | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:20 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                   |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| start   | -p running-upgrade-261470                             | running-upgrade-261470       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-621535                             | stopped-upgrade-621535       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:19 UTC |
	| start   | -p embed-certs-484167                                 | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                          |                              |         |         |                     |                     |
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:19:53
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:19:53.287888   63868 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:19:53.288010   63868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:19:53.288018   63868 out.go:304] Setting ErrFile to fd 2...
	I0717 01:19:53.288026   63868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:19:53.288224   63868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:19:53.288804   63868 out.go:298] Setting JSON to false
	I0717 01:19:53.289703   63868 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7342,"bootTime":1721171851,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:19:53.289759   63868 start.go:139] virtualization: kvm guest
	I0717 01:19:53.291892   63868 out.go:177] * [embed-certs-484167] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:19:53.293526   63868 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:19:53.293535   63868 notify.go:220] Checking for updates...
	I0717 01:19:53.296063   63868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:19:53.297369   63868 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:19:53.298608   63868 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:19:53.299700   63868 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:19:53.301023   63868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:19:53.302607   63868 config.go:182] Loaded profile config "kubernetes-upgrade-729236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:19:53.302702   63868 config.go:182] Loaded profile config "old-k8s-version-249342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:19:53.302778   63868 config.go:182] Loaded profile config "running-upgrade-261470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0717 01:19:53.302858   63868 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:19:53.340793   63868 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 01:19:53.342129   63868 start.go:297] selected driver: kvm2
	I0717 01:19:53.342157   63868 start.go:901] validating driver "kvm2" against <nil>
	I0717 01:19:53.342179   63868 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:19:53.343384   63868 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:19:53.343492   63868 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:19:53.359848   63868 install.go:137] /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:19:53.359892   63868 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 01:19:53.360093   63868 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:19:53.360149   63868 cni.go:84] Creating CNI manager for ""
	I0717 01:19:53.360159   63868 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:19:53.360170   63868 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 01:19:53.360226   63868 start.go:340] cluster config:
	{Name:embed-certs-484167 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-484167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:19:53.360315   63868 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:19:53.362120   63868 out.go:177] * Starting "embed-certs-484167" primary control-plane node in "embed-certs-484167" cluster
	I0717 01:19:53.363336   63868 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:19:53.363367   63868 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 01:19:53.363374   63868 cache.go:56] Caching tarball of preloaded images
	I0717 01:19:53.363461   63868 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:19:53.363475   63868 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 01:19:53.363588   63868 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/embed-certs-484167/config.json ...
	I0717 01:19:53.363612   63868 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/embed-certs-484167/config.json: {Name:mkb1c3ac0f563732c16b768b81cb35aab7f5f1e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:19:53.363731   63868 start.go:360] acquireMachinesLock for embed-certs-484167: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:19:53.363764   63868 start.go:364] duration metric: took 21.602µs to acquireMachinesLock for "embed-certs-484167"
	I0717 01:19:53.363778   63868 start.go:93] Provisioning new machine with config: &{Name:embed-certs-484167 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.2 ClusterName:embed-certs-484167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:19:53.363836   63868 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 01:19:49.624163   63629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0717 01:19:49.624204   63629 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0717 01:19:49.624247   63629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0717 01:19:52.100569   63629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.476283094s)
	I0717 01:19:52.100603   63629 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0717 01:19:52.100649   63629 cache_images.go:92] duration metric: took 4.103979751s to LoadCachedImages
	W0717 01:19:52.100738   63629 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0717 01:19:52.100754   63629 kubeadm.go:934] updating node { 192.168.50.235 8443 v1.24.1 crio true true} ...
	I0717 01:19:52.100883   63629 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=running-upgrade-261470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-261470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:19:52.100965   63629 ssh_runner.go:195] Run: crio config
	I0717 01:19:52.156438   63629 cni.go:84] Creating CNI manager for ""
	I0717 01:19:52.156460   63629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:19:52.156471   63629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:19:52.156493   63629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.235 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-261470 NodeName:running-upgrade-261470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:19:52.156699   63629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "running-upgrade-261470"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:19:52.156770   63629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0717 01:19:52.168635   63629 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:19:52.168705   63629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:19:52.176027   63629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0717 01:19:52.192604   63629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:19:52.209054   63629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0717 01:19:52.226830   63629 ssh_runner.go:195] Run: grep 192.168.50.235	control-plane.minikube.internal$ /etc/hosts
	I0717 01:19:52.231066   63629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:19:52.385493   63629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:19:52.399869   63629 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470 for IP: 192.168.50.235
	I0717 01:19:52.399890   63629 certs.go:194] generating shared ca certs ...
	I0717 01:19:52.399914   63629 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:19:52.400065   63629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:19:52.400132   63629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:19:52.400146   63629 certs.go:256] generating profile certs ...
	I0717 01:19:52.400244   63629 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/client.key
	I0717 01:19:52.400276   63629 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/apiserver.key.2405e702
	I0717 01:19:52.400292   63629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/apiserver.crt.2405e702 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.235]
	I0717 01:19:52.478900   63629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/apiserver.crt.2405e702 ...
	I0717 01:19:52.478928   63629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/apiserver.crt.2405e702: {Name:mk604e1fdf0a31c4cc2d526539b129d0ecfa318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:19:52.479118   63629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/apiserver.key.2405e702 ...
	I0717 01:19:52.479138   63629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/apiserver.key.2405e702: {Name:mke7dec279538d0cf3c8d89fccf9788b28c5231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:19:52.479245   63629 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/apiserver.crt.2405e702 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/apiserver.crt
	I0717 01:19:52.479416   63629 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/apiserver.key.2405e702 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/apiserver.key
	I0717 01:19:52.479603   63629 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/proxy-client.key
	I0717 01:19:52.479724   63629 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:19:52.479768   63629 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:19:52.479783   63629 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:19:52.479820   63629 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:19:52.479857   63629 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:19:52.479892   63629 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:19:52.479952   63629 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:19:52.480496   63629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:19:52.511366   63629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:19:52.535794   63629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:19:52.562102   63629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:19:52.585642   63629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 01:19:52.614974   63629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:19:52.640167   63629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:19:52.666567   63629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:19:52.687947   63629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:19:52.711502   63629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:19:52.755292   63629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:19:52.779781   63629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:19:52.799494   63629 ssh_runner.go:195] Run: openssl version
	I0717 01:19:52.815749   63629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:19:52.826540   63629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:19:52.832364   63629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:19:52.832424   63629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:19:52.839070   63629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:19:52.848400   63629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:19:52.858412   63629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:19:52.863735   63629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:19:52.863793   63629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:19:52.870525   63629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:19:52.881092   63629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:19:52.894617   63629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:19:52.900120   63629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:19:52.900180   63629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:19:52.906769   63629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
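The three ssh_runner sequences above all follow the same pattern: copy a PEM into /usr/share/ca-certificates, ask openssl for its subject hash, then symlink it as /etc/ssl/certs/<hash>.0 so system TLS libraries pick it up. A minimal local sketch of that convention is below; this is illustrative, not minikube's own ssh_runner code, and it assumes the openssl binary is on PATH and the caller may write into certDir.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the two commands seen in the log:
//   openssl x509 -hash -noout -in <pemPath>
//   ln -fs <pemPath> <certDir>/<hash>.0
func installCACert(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // behave like ln -f: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}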
	I0717 01:19:52.917737   63629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:19:52.924042   63629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:19:52.929971   63629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:19:52.937627   63629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:19:52.945026   63629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:19:52.951091   63629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:19:52.957809   63629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
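The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the existing cluster is reused. The same check can be expressed with Go's standard library; the sketch below assumes PEM-encoded certificates on disk and is not the minikube implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the certificate at path is valid now and will
// still be valid d from now (the -checkend semantics).
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	now := time.Now()
	return now.After(cert.NotBefore) && now.Add(d).Before(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		ok, err := validFor(p, 24*time.Hour)
		fmt.Printf("%s: ok=%v err=%v\n", p, ok, err)
	}
}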
	I0717 01:19:52.963862   63629 kubeadm.go:392] StartCluster: {Name:running-upgrade-261470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running
-upgrade-261470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.235 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 01:19:52.963957   63629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:19:52.964017   63629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:19:52.994576   63629 cri.go:89] found id: "2c5062778a9b9b57d631ff48492af69331a6bc8b7f2334d48fa7c7e016963b8e"
	I0717 01:19:52.994599   63629 cri.go:89] found id: "72c783561602c877cfcbbe417b6c9a55bdd64c6336c14f3895db37501a2af95e"
	I0717 01:19:52.994604   63629 cri.go:89] found id: "9ab44fa44375d114e2600c26fadebed451560e8719cf4095fc2ddad3281f0cc8"
	I0717 01:19:52.994608   63629 cri.go:89] found id: "6d4e7db9d9fb681f7caa22ea7aa7c68cc153dd9d480cb9c7e535dc35a9fa3615"
	I0717 01:19:52.994612   63629 cri.go:89] found id: ""
	I0717 01:19:52.994660   63629 ssh_runner.go:195] Run: sudo runc list -f json
	I0717 01:19:53.022152   63629 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"2c5062778a9b9b57d631ff48492af69331a6bc8b7f2334d48fa7c7e016963b8e","pid":1098,"status":"running","bundle":"/run/containers/storage/overlay-containers/2c5062778a9b9b57d631ff48492af69331a6bc8b7f2334d48fa7c7e016963b8e/userdata","rootfs":"/var/lib/containers/storage/overlay/c1fab362fceb821cb61a9c6db1d8d7f02775c8b72358f75c4422f8b6bab43fe7/merged","created":"2024-07-17T01:19:21.830461664Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"eff52b7d","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"eff52b7d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2c5062778a9b9b57d631ff48492af69331a6bc8b7f2334d48fa7c7e016963b8e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-07-17T01:19:21.66547989Z","io.kubernetes.cri-o.Image":"18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.24.1","io.kubernetes.cri-o.ImageRef":"18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-261470\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0178b2af504bd1e7e48a8b83b3fc7d80\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-261470_0178b2af504bd1e7e48a8b83b3fc7d80/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-sched
uler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c1fab362fceb821cb61a9c6db1d8d7f02775c8b72358f75c4422f8b6bab43fe7/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-running-upgrade-261470_kube-system_0178b2af504bd1e7e48a8b83b3fc7d80_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/83ff5e0565f53893b1e79ee5659c56908c88360fb6fee2899fbda5b96185646d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"83ff5e0565f53893b1e79ee5659c56908c88360fb6fee2899fbda5b96185646d","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-running-upgrade-261470_kube-system_0178b2af504bd1e7e48a8b83b3fc7d80_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0178b2af504bd1e7e48a8b83b3fc7d80/etc-hosts\",\"readonly\":false},{\"conta
iner_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0178b2af504bd1e7e48a8b83b3fc7d80/containers/kube-scheduler/0cf4ee31\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-running-upgrade-261470","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0178b2af504bd1e7e48a8b83b3fc7d80","kubernetes.io/config.hash":"0178b2af504bd1e7e48a8b83b3fc7d80","kubernetes.io/config.seen":"2024-07-17T01:19:18.155589355Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6d4e7db9d9fb681f7caa22ea7aa7c68cc153dd9d480cb9c7e535dc35a9fa3615","pid":1033,"status":"running"
,"bundle":"/run/containers/storage/overlay-containers/6d4e7db9d9fb681f7caa22ea7aa7c68cc153dd9d480cb9c7e535dc35a9fa3615/userdata","rootfs":"/var/lib/containers/storage/overlay/5af804aa3871df6d9708751b5e7da150a7c95d96d07dfd2dc17d85301c6bca9c/merged","created":"2024-07-17T01:19:21.506264026Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1c682979","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1c682979\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6d4e7db9d9fb681f7caa22ea7aa7c68cc153dd9d480cb9c7e535dc35a9f
a3615","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-07-17T01:19:21.335342913Z","io.kubernetes.cri-o.Image":"b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.24.1","io.kubernetes.cri-o.ImageRef":"b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-261470\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"13b3379bde7afabc3fd11f8c950cd608\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-261470_13b3379bde7afabc3fd11f8c950cd608/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5af804aa3871df6d9708751b5e7da150a7c95d96d07
dfd2dc17d85301c6bca9c/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-running-upgrade-261470_kube-system_13b3379bde7afabc3fd11f8c950cd608_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e1c43cb6ed57445687c6c2e9b52d87aaf774174d09acde4ee47eec59de1e00b5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e1c43cb6ed57445687c6c2e9b52d87aaf774174d09acde4ee47eec59de1e00b5","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-running-upgrade-261470_kube-system_13b3379bde7afabc3fd11f8c950cd608_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/13b3379bde7afabc3fd11f8c950cd608/containers/kube-controller-manager/d7db5d8c\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib
/kubelet/pods/13b3379bde7afabc3fd11f8c950cd608/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-261470","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"13b3379bde7afabc3fd11f8c950cd608","kubernetes.io/config.hash":"13b3379bde7afabc3fd11f8c950cd608","kubernetes.io/config.seen":"2024
-07-17T01:19:18.155588498Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72c783561602c877cfcbbe417b6c9a55bdd64c6336c14f3895db37501a2af95e","pid":1084,"status":"running","bundle":"/run/containers/storage/overlay-containers/72c783561602c877cfcbbe417b6c9a55bdd64c6336c14f3895db37501a2af95e/userdata","rootfs":"/var/lib/containers/storage/overlay/1778aeaa76995de577db07c5be39ac0d6f8d4f2a6093e92f37ea048af5d21dd6/merged","created":"2024-07-17T01:19:21.744022323Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1db83f59","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernet
es.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1db83f59\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"72c783561602c877cfcbbe417b6c9a55bdd64c6336c14f3895db37501a2af95e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-07-17T01:19:21.549365297Z","io.kubernetes.cri-o.Image":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri-o.ImageRef":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-261470\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9f4d0550da7f19a1515389f361b7a3dd\"}","io.kub
ernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-261470_9f4d0550da7f19a1515389f361b7a3dd/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1778aeaa76995de577db07c5be39ac0d6f8d4f2a6093e92f37ea048af5d21dd6/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-running-upgrade-261470_kube-system_9f4d0550da7f19a1515389f361b7a3dd_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/ab5a4cf84cf32e808e8f8bbe5ed8c84006efb596918101ff11faa48a25f96e7f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ab5a4cf84cf32e808e8f8bbe5ed8c84006efb596918101ff11faa48a25f96e7f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-running-upgrade-261470_kube-system_9f4d0550da7f19a1515389f361b7a3dd_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"contai
ner_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9f4d0550da7f19a1515389f361b7a3dd/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9f4d0550da7f19a1515389f361b7a3dd/containers/etcd/7b3b4eb7\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-running-upgrade-261470","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9f4d0550da7f19a1515389f361b7a3dd","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.235:2379","kubernetes.io/config.hash":"9f4d0550da7f19a1515389f361b7a3dd","kubernetes.io/config.seen":"2024-07-17T01:19:18.155549725Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.s
ystemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"83ff5e0565f53893b1e79ee5659c56908c88360fb6fee2899fbda5b96185646d","pid":966,"status":"running","bundle":"/run/containers/storage/overlay-containers/83ff5e0565f53893b1e79ee5659c56908c88360fb6fee2899fbda5b96185646d/userdata","rootfs":"/var/lib/containers/storage/overlay/46ff7e75e121360225d38d08145ac3f45a0027e68cdb6490d8b6917c230c8aa8/merged","created":"2024-07-17T01:19:20.862850318Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-07-17T01:19:18.155589355Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"0178b2af504bd1e7e48a8b83b3fc7d80\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod0178b2af504bd1e7e48a8b83b3fc7d80.slice","io.kubern
etes.cri-o.ContainerID":"83ff5e0565f53893b1e79ee5659c56908c88360fb6fee2899fbda5b96185646d","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-running-upgrade-261470_kube-system_0178b2af504bd1e7e48a8b83b3fc7d80_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-07-17T01:19:20.729393409Z","io.kubernetes.cri-o.HostName":"running-upgrade-261470","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/83ff5e0565f53893b1e79ee5659c56908c88360fb6fee2899fbda5b96185646d/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-running-upgrade-261470","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-261470\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"0178b2af504bd1e7e48a8b83b3fc7d80\",\"io.kubernetes.container.name\":\"P
OD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-261470_0178b2af504bd1e7e48a8b83b3fc7d80/83ff5e0565f53893b1e79ee5659c56908c88360fb6fee2899fbda5b96185646d.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-scheduler-running-upgrade-261470\",\"UID\":\"0178b2af504bd1e7e48a8b83b3fc7d80\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/46ff7e75e121360225d38d08145ac3f45a0027e68cdb6490d8b6917c230c8aa8/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-running-upgrade-261470_kube-system_0178b2af504bd1e7e48a8b83b3fc7d80_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/83ff5e0565f53893b1e79ee5659c56908c88360fb6fee2899fbda5b96185646d/userdata/resolv.conf","io.kubernetes.
cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"83ff5e0565f53893b1e79ee5659c56908c88360fb6fee2899fbda5b96185646d","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/83ff5e0565f53893b1e79ee5659c56908c88360fb6fee2899fbda5b96185646d/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-running-upgrade-261470","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"0178b2af504bd1e7e48a8b83b3fc7d80","kubernetes.io/config.hash":"0178b2af504bd1e7e48a8b83b3fc7d80","kubernetes.io/config.seen":"2024-07-17T01:19:18.155589355Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ab44fa44375d114e2600c26fadebed451560e8719cf4095fc2ddad3281f0cc8","pid":1045,"status":"running","bundle":"/run/containers/storage/overlay-containers/9ab44fa44375d114e2600c26fadebed451560e8719cf4095fc2ddad3281f0cc8/userdata","
rootfs":"/var/lib/containers/storage/overlay/d2a1899bd50dc670bdcbac67ed7647b0aac457926327f260d0fa6f5ab00f6aaa/merged","created":"2024-07-17T01:19:21.506700101Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6e189733","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6e189733\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9ab44fa44375d114e2600c26fadebed451560e8719cf4095fc2ddad3281f0cc8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-07-17T01:19:21.357422127Z","io.kubernetes.cri-o.
Image":"e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.24.1","io.kubernetes.cri-o.ImageRef":"e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-261470\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6170aac1aa1731e47d47977a9f22c95c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-261470_6170aac1aa1731e47d47977a9f22c95c/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d2a1899bd50dc670bdcbac67ed7647b0aac457926327f260d0fa6f5ab00f6aaa/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-running-upgrade-261470_kube-system_6170aac1aa1731e47d47977a9f22c95c_0","io.kubernetes.cri-o.ResolvPat
h":"/var/run/containers/storage/overlay-containers/9d9449e1f100c78da1f9b8e6de8f74c5c8814002e1b2b8be189095480104d20a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9d9449e1f100c78da1f9b8e6de8f74c5c8814002e1b2b8be189095480104d20a","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-running-upgrade-261470_kube-system_6170aac1aa1731e47d47977a9f22c95c_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/6170aac1aa1731e47d47977a9f22c95c/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6170aac1aa1731e47d47977a9f22c95c/containers/kube-apiserver/69534fa4\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_p
ath\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-running-upgrade-261470","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6170aac1aa1731e47d47977a9f22c95c","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.235:8443","kubernetes.io/config.hash":"6170aac1aa1731e47d47977a9f22c95c","kubernetes.io/config.seen":"2024-07-17T01:19:18.155587194Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9d9449e1f100c78da1f9b8e6de8f74c5c8814002e1b2b8be189095480104d20a","pid":965,"status":"running","bundle":"/run/containers/storage/o
verlay-containers/9d9449e1f100c78da1f9b8e6de8f74c5c8814002e1b2b8be189095480104d20a/userdata","rootfs":"/var/lib/containers/storage/overlay/3a4b2986fc9d5fb28d0c2ce5c6d4021253b58ab1b565022e96b1ab287b2cacbb/merged","created":"2024-07-17T01:19:20.864314196Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"6170aac1aa1731e47d47977a9f22c95c\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.50.235:8443\",\"kubernetes.io/config.seen\":\"2024-07-17T01:19:18.155587194Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod6170aac1aa1731e47d47977a9f22c95c.slice","io.kubernetes.cri-o.ContainerID":"9d9449e1f100c78da1f9b8e6de8f74c5c8814002e1b2b8be189095480104d20a","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-running-upgrade-261470_kube-system_6170aac1aa1731e47d47977a9f22c95c_0","io.kubernetes.cri-o.Con
tainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-07-17T01:19:20.738416435Z","io.kubernetes.cri-o.HostName":"running-upgrade-261470","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9d9449e1f100c78da1f9b8e6de8f74c5c8814002e1b2b8be189095480104d20a/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-running-upgrade-261470","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"6170aac1aa1731e47d47977a9f22c95c\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-261470\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-261470_6170aac1aa1731e47d47977a9f22c95c/9d9449e1f100c78da1f9b8e6de8f74c5c8814002e1b2b8be189095480104d20a.log","io.kubernetes.cri-o.Metadata":"{\"Name\"
:\"kube-apiserver-running-upgrade-261470\",\"UID\":\"6170aac1aa1731e47d47977a9f22c95c\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3a4b2986fc9d5fb28d0c2ce5c6d4021253b58ab1b565022e96b1ab287b2cacbb/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-running-upgrade-261470_kube-system_6170aac1aa1731e47d47977a9f22c95c_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9d9449e1f100c78da1f9b8e6de8f74c5c8814002e1b2b8be189095480104d20a/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9d9449e1f100c78da1f9b8e6de8f74c5c8814002e1b2b8be189095480104d20a","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/over
lay-containers/9d9449e1f100c78da1f9b8e6de8f74c5c8814002e1b2b8be189095480104d20a/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-running-upgrade-261470","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"6170aac1aa1731e47d47977a9f22c95c","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.235:8443","kubernetes.io/config.hash":"6170aac1aa1731e47d47977a9f22c95c","kubernetes.io/config.seen":"2024-07-17T01:19:18.155587194Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ab5a4cf84cf32e808e8f8bbe5ed8c84006efb596918101ff11faa48a25f96e7f","pid":982,"status":"running","bundle":"/run/containers/storage/overlay-containers/ab5a4cf84cf32e808e8f8bbe5ed8c84006efb596918101ff11faa48a25f96e7f/userdata","rootfs":"/var/lib/containers/storage/overlay/15029aa11a451305585999272ef738a9ec9feb10b90ee1b3c2f448a198588264/merged","created":"2024-07-17T01:19:20.9195945Z",
"annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.50.235:2379\",\"kubernetes.io/config.seen\":\"2024-07-17T01:19:18.155549725Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"9f4d0550da7f19a1515389f361b7a3dd\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod9f4d0550da7f19a1515389f361b7a3dd.slice","io.kubernetes.cri-o.ContainerID":"ab5a4cf84cf32e808e8f8bbe5ed8c84006efb596918101ff11faa48a25f96e7f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-running-upgrade-261470_kube-system_9f4d0550da7f19a1515389f361b7a3dd_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-07-17T01:19:20.693990368Z","io.kubernetes.cri-o.HostName":"running-upgrade-261470","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/ab5a4cf84cf32e808
e8f8bbe5ed8c84006efb596918101ff11faa48a25f96e7f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-running-upgrade-261470","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"etcd-running-upgrade-261470\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"component\":\"etcd\",\"io.kubernetes.pod.uid\":\"9f4d0550da7f19a1515389f361b7a3dd\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-261470_9f4d0550da7f19a1515389f361b7a3dd/ab5a4cf84cf32e808e8f8bbe5ed8c84006efb596918101ff11faa48a25f96e7f.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"etcd-running-upgrade-261470\",\"UID\":\"9f4d0550da7f19a1515389f361b7a3dd\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/15029aa11a451305585999272ef738a9ec9feb10b90ee1b3c2f448a198588264/merged","io.kubernetes.cri-o.Name":"k8s_etcd-running-upgrade-26
1470_kube-system_9f4d0550da7f19a1515389f361b7a3dd_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/ab5a4cf84cf32e808e8f8bbe5ed8c84006efb596918101ff11faa48a25f96e7f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ab5a4cf84cf32e808e8f8bbe5ed8c84006efb596918101ff11faa48a25f96e7f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/ab5a4cf84cf32e808e8f8bbe5ed8c84006efb596918101ff11faa48a25f96e7f/userdata/shm","io.kubernetes.pod.name":"etcd-running-upgrade-261470","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"9f4d0550da7f19a1515389f361b7a3dd","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.235:2379","kubernet
es.io/config.hash":"9f4d0550da7f19a1515389f361b7a3dd","kubernetes.io/config.seen":"2024-07-17T01:19:18.155549725Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e1c43cb6ed57445687c6c2e9b52d87aaf774174d09acde4ee47eec59de1e00b5","pid":956,"status":"running","bundle":"/run/containers/storage/overlay-containers/e1c43cb6ed57445687c6c2e9b52d87aaf774174d09acde4ee47eec59de1e00b5/userdata","rootfs":"/var/lib/containers/storage/overlay/51fb157ead3f4703e869c3108c8487ba27fedbabf8f4f6b316ef54a60b63bb1b/merged","created":"2024-07-17T01:19:20.848979462Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"13b3379bde7afabc3fd11f8c950cd608\",\"kubernetes.io/config.seen\":\"2024-07-17T01:19:18.155588498Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.Cgro
upParent":"kubepods-burstable-pod13b3379bde7afabc3fd11f8c950cd608.slice","io.kubernetes.cri-o.ContainerID":"e1c43cb6ed57445687c6c2e9b52d87aaf774174d09acde4ee47eec59de1e00b5","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-running-upgrade-261470_kube-system_13b3379bde7afabc3fd11f8c950cd608_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-07-17T01:19:20.715455103Z","io.kubernetes.cri-o.HostName":"running-upgrade-261470","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/e1c43cb6ed57445687c6c2e9b52d87aaf774174d09acde4ee47eec59de1e00b5/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-running-upgrade-261470","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-261470\",\"io.kubernetes.container.name\":\"POD\",\"compon
ent\":\"kube-controller-manager\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"13b3379bde7afabc3fd11f8c950cd608\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-261470_13b3379bde7afabc3fd11f8c950cd608/e1c43cb6ed57445687c6c2e9b52d87aaf774174d09acde4ee47eec59de1e00b5.log","io.kubernetes.cri-o.Metadata":"{\"Name\":\"kube-controller-manager-running-upgrade-261470\",\"UID\":\"13b3379bde7afabc3fd11f8c950cd608\",\"Namespace\":\"kube-system\",\"Attempt\":0}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/51fb157ead3f4703e869c3108c8487ba27fedbabf8f4f6b316ef54a60b63bb1b/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-running-upgrade-261470_kube-system_13b3379bde7afabc3fd11f8c950cd608_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/va
r/run/containers/storage/overlay-containers/e1c43cb6ed57445687c6c2e9b52d87aaf774174d09acde4ee47eec59de1e00b5/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e1c43cb6ed57445687c6c2e9b52d87aaf774174d09acde4ee47eec59de1e00b5","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e1c43cb6ed57445687c6c2e9b52d87aaf774174d09acde4ee47eec59de1e00b5/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-261470","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"13b3379bde7afabc3fd11f8c950cd608","kubernetes.io/config.hash":"13b3379bde7afabc3fd11f8c950cd608","kubernetes.io/config.seen":"2024-07-17T01:19:18.155588498Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
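The JSON blob above is the raw output of `sudo runc list -f json`: one entry per OCI container, with the CRI-O pod metadata carried in the annotations map. A small sketch of decoding that payload is shown below; the field names follow the JSON keys visible in the log, the helper names are illustrative, and it assumes passwordless sudo and runc on PATH.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer keeps only the fields of interest from `runc list -f json`.
type runcContainer struct {
	ID          string            `json:"id"`
	PID         int               `json:"pid"`
	Status      string            `json:"status"`
	Bundle      string            `json:"bundle"`
	Annotations map[string]string `json:"annotations"`
}

func listRuncContainers() (map[string]runcContainer, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, err
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	byID := make(map[string]runcContainer, len(containers))
	for _, c := range containers {
		byID[c.ID] = c
	}
	return byID, nil
}

func main() {
	byID, err := listRuncContainers()
	if err != nil {
		fmt.Println("runc list failed:", err)
		return
	}
	for id, c := range byID {
		fmt.Printf("%s %-8s %s\n", id[:12], c.Status, c.Annotations["io.kubernetes.container.name"])
	}
}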
	I0717 01:19:53.022584   63629 cri.go:126] list returned 8 containers
	I0717 01:19:53.022605   63629 cri.go:129] container: {ID:2c5062778a9b9b57d631ff48492af69331a6bc8b7f2334d48fa7c7e016963b8e Status:running}
	I0717 01:19:53.022634   63629 cri.go:135] skipping {2c5062778a9b9b57d631ff48492af69331a6bc8b7f2334d48fa7c7e016963b8e running}: state = "running", want "paused"
	I0717 01:19:53.022654   63629 cri.go:129] container: {ID:6d4e7db9d9fb681f7caa22ea7aa7c68cc153dd9d480cb9c7e535dc35a9fa3615 Status:running}
	I0717 01:19:53.022663   63629 cri.go:135] skipping {6d4e7db9d9fb681f7caa22ea7aa7c68cc153dd9d480cb9c7e535dc35a9fa3615 running}: state = "running", want "paused"
	I0717 01:19:53.022673   63629 cri.go:129] container: {ID:72c783561602c877cfcbbe417b6c9a55bdd64c6336c14f3895db37501a2af95e Status:running}
	I0717 01:19:53.022682   63629 cri.go:135] skipping {72c783561602c877cfcbbe417b6c9a55bdd64c6336c14f3895db37501a2af95e running}: state = "running", want "paused"
	I0717 01:19:53.022692   63629 cri.go:129] container: {ID:83ff5e0565f53893b1e79ee5659c56908c88360fb6fee2899fbda5b96185646d Status:running}
	I0717 01:19:53.022703   63629 cri.go:131] skipping 83ff5e0565f53893b1e79ee5659c56908c88360fb6fee2899fbda5b96185646d - not in ps
	I0717 01:19:53.022712   63629 cri.go:129] container: {ID:9ab44fa44375d114e2600c26fadebed451560e8719cf4095fc2ddad3281f0cc8 Status:running}
	I0717 01:19:53.022722   63629 cri.go:135] skipping {9ab44fa44375d114e2600c26fadebed451560e8719cf4095fc2ddad3281f0cc8 running}: state = "running", want "paused"
	I0717 01:19:53.022732   63629 cri.go:129] container: {ID:9d9449e1f100c78da1f9b8e6de8f74c5c8814002e1b2b8be189095480104d20a Status:running}
	I0717 01:19:53.022742   63629 cri.go:131] skipping 9d9449e1f100c78da1f9b8e6de8f74c5c8814002e1b2b8be189095480104d20a - not in ps
	I0717 01:19:53.022751   63629 cri.go:129] container: {ID:ab5a4cf84cf32e808e8f8bbe5ed8c84006efb596918101ff11faa48a25f96e7f Status:running}
	I0717 01:19:53.022758   63629 cri.go:131] skipping ab5a4cf84cf32e808e8f8bbe5ed8c84006efb596918101ff11faa48a25f96e7f - not in ps
	I0717 01:19:53.022767   63629 cri.go:129] container: {ID:e1c43cb6ed57445687c6c2e9b52d87aaf774174d09acde4ee47eec59de1e00b5 Status:running}
	I0717 01:19:53.022774   63629 cri.go:131] skipping e1c43cb6ed57445687c6c2e9b52d87aaf774174d09acde4ee47eec59de1e00b5 - not in ps
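The cri.go lines above apply a simple selection rule before deciding whether anything needs unpausing: keep only containers whose runc state matches the requested state ("paused" here), and ignore runc entries, such as the pod sandboxes, whose IDs never appeared in the `crictl ps` listing. A sketch of that rule, with illustrative names rather than minikube's API, follows.

package main

import "fmt"

type containerState struct {
	ID     string
	Status string // e.g. "running", "paused"
}

// selectContainers returns the IDs whose status matches want and which crictl
// actually reported (inPS); everything else is skipped, as in the log.
func selectContainers(all []containerState, inPS map[string]bool, want string) []string {
	var ids []string
	for _, c := range all {
		if !inPS[c.ID] {
			continue // "not in ps": pod sandbox or unrelated container
		}
		if c.Status != want {
			continue // e.g. state = "running", want "paused"
		}
		ids = append(ids, c.ID)
	}
	return ids
}

func main() {
	all := []containerState{
		{ID: "2c5062778a9b", Status: "running"},
		{ID: "83ff5e0565f5", Status: "running"}, // sandbox, not in crictl ps
	}
	inPS := map[string]bool{"2c5062778a9b": true}
	fmt.Println(selectContainers(all, inPS, "paused")) // [] -- nothing is paused
}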
	I0717 01:19:53.022839   63629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0717 01:19:53.032578   63629 kubeadm.go:405] apiserver tunnel failed: apiserver port not set
	I0717 01:19:53.032600   63629 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:19:53.032606   63629 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:19:53.032668   63629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:19:53.040565   63629 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:19:53.041293   63629 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-261470" does not appear in /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:19:53.041640   63629 kubeconfig.go:62] /home/jenkins/minikube-integration/19265-12897/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-261470" cluster setting kubeconfig missing "running-upgrade-261470" context setting]
	I0717 01:19:53.042316   63629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:19:53.043351   63629 kapi.go:59] client config for running-upgrade-261470: &rest.Config{Host:"https://192.168.50.235:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/client.crt", KeyFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/running-upgrade-261470/client.key", CAFile:"/home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData
:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
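"kubeconfig needs updating (will repair)" in the lines above means the profile's cluster and context entries are missing from the Jenkins kubeconfig and are re-added before the REST client is built. A rough sketch of that repair step, under the assumption that client-go's clientcmd package is used the way the log suggests (this is not the kubeconfig.go implementation), could look like this.

package main

import (
	"log"

	clientcmd "k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// ensureProfile adds cluster and context entries named after the profile if
// they are absent, then writes the kubeconfig back to disk.
func ensureProfile(kubeconfig, name, server, caFile string) error {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caFile}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	}
	return clientcmd.WriteToFile(*cfg, kubeconfig)
}

func main() {
	err := ensureProfile(
		"/home/jenkins/minikube-integration/19265-12897/kubeconfig",
		"running-upgrade-261470",
		"https://192.168.50.235:8443",
		"/home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt",
	)
	if err != nil {
		log.Fatal(err)
	}
}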
	I0717 01:19:53.044007   63629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:19:53.052606   63629 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "running-upgrade-261470"
	   kubeletExtraArgs:
	     node-ip: 192.168.50.235
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
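The drift check above runs `diff -u` between the kubeadm config already on the node and the freshly generated one; any difference (here the CRI socket URI and the cgroup driver) triggers a reconfiguration from the new file. A minimal local sketch of the decision, assuming both files are readable on the current machine rather than over SSH, is below.

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsReconfigure reports whether the deployed kubeadm config differs from
// the newly generated one.
func needsReconfigure(current, next string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return true, err // treat a missing or unreadable config as drift
	}
	b, err := os.ReadFile(next)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	drift, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("drift:", drift, "err:", err)
}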
	I0717 01:19:53.052626   63629 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:19:53.052640   63629 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:19:53.052689   63629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:19:53.090717   63629 cri.go:89] found id: "2c5062778a9b9b57d631ff48492af69331a6bc8b7f2334d48fa7c7e016963b8e"
	I0717 01:19:53.090744   63629 cri.go:89] found id: "72c783561602c877cfcbbe417b6c9a55bdd64c6336c14f3895db37501a2af95e"
	I0717 01:19:53.090750   63629 cri.go:89] found id: "9ab44fa44375d114e2600c26fadebed451560e8719cf4095fc2ddad3281f0cc8"
	I0717 01:19:53.090754   63629 cri.go:89] found id: "6d4e7db9d9fb681f7caa22ea7aa7c68cc153dd9d480cb9c7e535dc35a9fa3615"
	I0717 01:19:53.090759   63629 cri.go:89] found id: ""
	I0717 01:19:53.090765   63629 cri.go:234] Stopping containers: [2c5062778a9b9b57d631ff48492af69331a6bc8b7f2334d48fa7c7e016963b8e 72c783561602c877cfcbbe417b6c9a55bdd64c6336c14f3895db37501a2af95e 9ab44fa44375d114e2600c26fadebed451560e8719cf4095fc2ddad3281f0cc8 6d4e7db9d9fb681f7caa22ea7aa7c68cc153dd9d480cb9c7e535dc35a9fa3615]
	I0717 01:19:53.090819   63629 ssh_runner.go:195] Run: which crictl
	I0717 01:19:53.095635   63629 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 2c5062778a9b9b57d631ff48492af69331a6bc8b7f2334d48fa7c7e016963b8e 72c783561602c877cfcbbe417b6c9a55bdd64c6336c14f3895db37501a2af95e 9ab44fa44375d114e2600c26fadebed451560e8719cf4095fc2ddad3281f0cc8 6d4e7db9d9fb681f7caa22ea7aa7c68cc153dd9d480cb9c7e535dc35a9fa3615
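The crictl invocation on the previous line stops the four kube-system containers with a 10-second grace period before the runtime kills them. Expressed as a small helper (illustrative only, not minikube's cri package; assumes crictl at /usr/bin/crictl and non-interactive sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// stopContainers runs: sudo /usr/bin/crictl stop --timeout=<sec> <ids...>
func stopContainers(ids []string, timeoutSec int) error {
	if len(ids) == 0 {
		return nil
	}
	args := []string{"/usr/bin/crictl", "stop", fmt.Sprintf("--timeout=%d", timeoutSec)}
	args = append(args, ids...)
	cmd := exec.Command("sudo", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := stopContainers([]string{
		"2c5062778a9b9b57d631ff48492af69331a6bc8b7f2334d48fa7c7e016963b8e",
		"72c783561602c877cfcbbe417b6c9a55bdd64c6336c14f3895db37501a2af95e",
	}, 10)
	if err != nil {
		fmt.Fprintln(os.Stderr, "stop failed:", err)
	}
}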
	I0717 01:19:50.406837   63345 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I0717 01:19:50.414328   63345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:19:50.658905   63345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:19:50.678321   63345 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236 for IP: 192.168.39.195
	I0717 01:19:50.678349   63345 certs.go:194] generating shared ca certs ...
	I0717 01:19:50.678372   63345 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:19:50.678560   63345 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:19:50.678634   63345 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:19:50.678648   63345 certs.go:256] generating profile certs ...
	I0717 01:19:50.678758   63345 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/client.key
	I0717 01:19:50.678828   63345 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.key.facb88c7
	I0717 01:19:50.678888   63345 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/proxy-client.key
	I0717 01:19:50.679049   63345 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:19:50.679095   63345 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:19:50.679110   63345 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:19:50.679152   63345 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:19:50.679188   63345 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:19:50.679222   63345 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:19:50.679282   63345 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:19:50.680106   63345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:19:50.727821   63345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:19:50.771358   63345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:19:50.813740   63345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:19:50.863413   63345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 01:19:50.932449   63345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:19:51.003027   63345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:19:51.077779   63345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kubernetes-upgrade-729236/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:19:51.129380   63345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:19:51.169325   63345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:19:51.221732   63345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:19:51.292863   63345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:19:51.327759   63345 ssh_runner.go:195] Run: openssl version
	I0717 01:19:51.339771   63345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:19:51.353937   63345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:19:51.380020   63345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:19:51.380082   63345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:19:51.387377   63345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:19:51.407085   63345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:19:51.420802   63345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:19:51.425711   63345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:19:51.425768   63345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:19:51.432441   63345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:19:51.447474   63345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:19:51.464263   63345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:19:51.471307   63345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:19:51.471359   63345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:19:51.478805   63345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:19:51.496381   63345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:19:51.503865   63345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:19:51.514059   63345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:19:51.525426   63345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:19:51.535954   63345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:19:51.545753   63345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:19:51.556028   63345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
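The `openssl x509 -noout -checkend 86400` runs above confirm that each control-plane certificate stays valid for at least another 24 hours before the cluster is (re)started. Purely as an illustration of that check (minikube shells out to openssl here, as the log shows; this is not its implementation), a minimal Go sketch of the same 24-hour expiry test with the standard library might be:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window (analogous to `openssl x509 -checkend`).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log; run on the guest it checks the kubelet client cert.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}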
	I0717 01:19:51.565176   63345 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-729236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-729236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
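The StartCluster line above dumps the full cluster configuration for kubernetes-upgrade-729236. For readability, a simplified and hypothetical Go struct covering only a few of the fields visible in that dump (not minikube's actual config type) would be:

package main

import "fmt"

// ClusterSummary is an illustrative subset of the fields shown in the
// StartCluster dump; values below are copied from the log.
type ClusterSummary struct {
	Name              string
	Driver            string
	MemoryMB          int
	CPUs              int
	KubernetesVersion string
	ContainerRuntime  string
}

func main() {
	c := ClusterSummary{
		Name:              "kubernetes-upgrade-729236",
		Driver:            "kvm2",
		MemoryMB:          2200,
		CPUs:              2,
		KubernetesVersion: "v1.31.0-beta.0",
		ContainerRuntime:  "crio",
	}
	fmt.Printf("%+v\n", c)
}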
	I0717 01:19:51.565277   63345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:19:51.565359   63345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:19:51.622139   63345 cri.go:89] found id: "9e3f73754e1198c1cbbe82f707a8983559ebe56e72757a4839d22ed537f9fc17"
	I0717 01:19:51.622162   63345 cri.go:89] found id: "a71932bab3f722e6f5867c255b3dafe8c6362cf6020b99ce0c4743c81a31d73d"
	I0717 01:19:51.622170   63345 cri.go:89] found id: "229c37e05607e76a56bd9cdd66954e9144e2a481ff866a2e3f44e78a4c0b143a"
	I0717 01:19:51.622175   63345 cri.go:89] found id: "eec846700ed08fbf020709d28292c7cdb05c4bd5843dd76f4b74636e781cc74b"
	I0717 01:19:51.622180   63345 cri.go:89] found id: "490c4b648bde1647a9c296f85e78138ab334a3bee8be625dfb7e8c7dc1da51ff"
	I0717 01:19:51.622186   63345 cri.go:89] found id: "4814430a055ce45aeccb84e423a1f16db138818b1f955e0f0282af2b7379aa6e"
	I0717 01:19:51.622191   63345 cri.go:89] found id: "074f63c61b854f61521bffc10a196167ec67b1a7541ffc09bbe493d19dacdca3"
	I0717 01:19:51.622196   63345 cri.go:89] found id: "7cf5a0791427ade8d3bac95d0005e50ee466ccafe455e134289694d01480c0a3"
	I0717 01:19:51.622201   63345 cri.go:89] found id: "dde34309254ef2407d3457ade19e0e0cd32ccaf0a9da6fde033045010f1e73f4"
	I0717 01:19:51.622210   63345 cri.go:89] found id: "852fe5cadcacef32b048cf476e7dfc909249ce82207793544a873f4a33cd0118"
	I0717 01:19:51.622219   63345 cri.go:89] found id: "ab37d26523f6a2668d82dc822c1013bb3854fe361821dc1eced4cf6a8927c821"
	I0717 01:19:51.622227   63345 cri.go:89] found id: "1b38276e829cfb8c3a64742083106eb313073df4a1e732792379d7ef1d319016"
	I0717 01:19:51.622236   63345 cri.go:89] found id: "d495543c156c1af0167455ceb1c6476f1f25a369251ead6dc487020ec8ad2ea2"
	I0717 01:19:51.622242   63345 cri.go:89] found id: "ce9d3843b448e2f319be38445fc0d2063e9c1ad742574c8138b73eac3a390843"
	I0717 01:19:51.622252   63345 cri.go:89] found id: ""
	I0717 01:19:51.622296   63345 ssh_runner.go:195] Run: sudo runc list -f json
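The `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` command above is how the existing kube-system container IDs (the `found id:` lines) are enumerated before the upgrade proceeds, followed by `runc list -f json`. As a simplified local stand-in only (minikube runs this on the guest via its ssh_runner, as logged), a Go sketch of the same crictl query could be:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainerIDs runs the crictl query seen in the log and returns
// the container IDs, one per output line.
func listKubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}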
	
	
	==> CRI-O <==
	Jul 17 01:20:21 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:21.965514880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=156cc386-89d1-4656-8f41-966831757eb7 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:20:21 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:21.966536997Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2b2c7f4-2a3e-4649-b1d6-997b6989095f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:20:21 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:21.966896555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179221966874723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2b2c7f4-2a3e-4649-b1d6-997b6989095f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:20:21 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:21.967348924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e45b6a91-5260-494d-9069-41db96d82bed name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:20:21 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:21.967446866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e45b6a91-5260-494d-9069-41db96d82bed name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:20:21 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:21.968119775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c838cca9bd2305d8df2e156f6fe6b8c948ddd9b21dcf37536fe40254b95b659a,PodSandboxId:d10e16d3b033099e51953d458fcc9c53e48588c171cb713d268edb0bdb522e95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179218139983492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2484fdd1-6708-460c-b752-71518a2a8fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985e45595ff3111d2bedd6c02f0d5f6f1f5ef93aad7337d7ad0cf1176b8f97f9,PodSandboxId:3ffcab309c3275436f28a3b1caa8aac8682924c2e56450936be89b9067034c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179218104702259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-rtvfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4b028e-165e-4fd0-af50-0932be779cde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab9d3bd6ced02e79ef19e44fa25b950a22b45b951e9f9b793d381d373f1d59d,PodSandboxId:821ad6dc686cda94078b7d24f44a6b697a4da62c9c2d08e1c290f337ed252690,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721179218111225010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x64th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 56f909da-6f89-430a-bdd4-132e0e4dcf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f852a277ff1e7316e0ce8bfba6923032f119c74fe53f5208ae327118ca5bedc,PodSandboxId:b258750b706a9660565eff33570c939fe6ec61858a2b03ad7a78b405398fb3a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721179214311396892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 578e3a82bd28d3c6e10086b804f01a3e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32b5d7d9bad3a1554046a6d33479cb76c33a1bfcb4bbeedbfabf269fbe09d01,PodSandboxId:4b20b155911b3a97e20c42a55f4abc8a26b4116860cd9029cbb914d54118d3b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721179214266982285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: e850d1f77388a2c8aa82a3b26f74e356,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249cee49b59dd1b9876e6b7ebc0c455c8d488a3ed3b914652247caffa9fd71d0,PodSandboxId:34ce203c05d86b0b41823e2f068261e62f83ce355120ac1fac21d8c1e295370a,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721179214251098275,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7234cd4a1
85d3ef5ad8fb457f13137e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55fb778191054c0bb45b523b2d29a49d2b6678d325984e9d8b888d01354088c,PodSandboxId:e2bf9f96c227837a5bf305592ddda98480fb2eca2a97ab14d033fa0afdd8edac,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179210011384855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wtvw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48498faa-fb02-4d48-91e4-03369281f4fd,},Annota
tions:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47984958b96fa41b9f990958cad40a5e2bdb5f6fd52bcb610ae48839ddede51f,PodSandboxId:c587edda714b211e2141d00a61c3541bcfa9e4ad950b7ec751740da72081e55d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721179208994985640,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755be0600fe6b648cf1a1899edc4351e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a71932bab3f722e6f5867c255b3dafe8c6362cf6020b99ce0c4743c81a31d73d,PodSandboxId:821ad6dc686cda94078b7d24f44a6b697a4da62c9c2d08e1c290f337ed252690,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721179190064605942,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x64th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f909da-6f89-430a-bdd4-132e0e4dcf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3f73754e1198c1cbbe82f707a8983559ebe56e72757a4839d22ed537f9fc17,PodSandboxId:3ffcab309c3275436f28a3b1caa8aac8682924c2e56450936be89b9067034c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721179190954166283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.
kubernetes.pod.name: coredns-5cfdc65f69-rtvfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4b028e-165e-4fd0-af50-0932be779cde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229c37e05607e76a56bd9cdd66954e9144e2a481ff866a2e3f44e78a4c0b143a,PodSandboxId:d10e16d3b033099e51953d458fcc9c53e48588c171cb713d268edb0bdb522e95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721179190027965982,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2484fdd1-6708-460c-b752-71518a2a8fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:490c4b648bde1647a9c296f85e78138ab334a3bee8be625dfb7e8c7dc1da51ff,PodSandboxId:34ce203c05d86b0b41823e2f068261e62f83ce355120ac1fac21d8c1e295370a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37
af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721179189658115580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7234cd4a185d3ef5ad8fb457f13137e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec846700ed08fbf020709d28292c7cdb05c4bd5843dd76f4b74636e781cc74b,PodSandboxId:4b20b155911b3a97e20c42a55f4abc8a26b4116860cd9029cbb914d54118d3b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba08
7f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721179189708016892,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e850d1f77388a2c8aa82a3b26f74e356,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4814430a055ce45aeccb84e423a1f16db138818b1f955e0f0282af2b7379aa6e,PodSandboxId:b258750b706a9660565eff33570c939fe6ec61858a2b03ad7a78b405398fb3a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991
aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721179189571592807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578e3a82bd28d3c6e10086b804f01a3e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf5a0791427ade8d3bac95d0005e50ee466ccafe455e134289694d01480c0a3,PodSandboxId:2969f98a0db9444fe57d06681617fe060af85631b769010bf8306f85af1ebda7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018a
b909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721179176937117492,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wtvw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48498faa-fb02-4d48-91e4-03369281f4fd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d495543c156c1af0167455ceb1c6476f1f25a369251ead6dc487020ec8ad2ea2,PodSandboxId:5c319b8dab8c3146aa95d12aca052b069e6edfdf8efe26bb670af4268d353b68,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721179175627263905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755be0600fe6b648cf1a1899edc4351e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e45b6a91-5260-494d-9069-41db96d82bed name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.013258897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9f3cf1d-b65e-48b8-8d34-9ed7dff5238b name=/runtime.v1.RuntimeService/Version
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.013347919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9f3cf1d-b65e-48b8-8d34-9ed7dff5238b name=/runtime.v1.RuntimeService/Version
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.014416061Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50f43933-9312-4a7a-8aea-197df1a46046 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.014935825Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179222014909800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50f43933-9312-4a7a-8aea-197df1a46046 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.015461258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc80ded1-1eaa-4da8-8a2f-75427b0af920 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.015587115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc80ded1-1eaa-4da8-8a2f-75427b0af920 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.016353914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c838cca9bd2305d8df2e156f6fe6b8c948ddd9b21dcf37536fe40254b95b659a,PodSandboxId:d10e16d3b033099e51953d458fcc9c53e48588c171cb713d268edb0bdb522e95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179218139983492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2484fdd1-6708-460c-b752-71518a2a8fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985e45595ff3111d2bedd6c02f0d5f6f1f5ef93aad7337d7ad0cf1176b8f97f9,PodSandboxId:3ffcab309c3275436f28a3b1caa8aac8682924c2e56450936be89b9067034c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179218104702259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-rtvfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4b028e-165e-4fd0-af50-0932be779cde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab9d3bd6ced02e79ef19e44fa25b950a22b45b951e9f9b793d381d373f1d59d,PodSandboxId:821ad6dc686cda94078b7d24f44a6b697a4da62c9c2d08e1c290f337ed252690,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721179218111225010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x64th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 56f909da-6f89-430a-bdd4-132e0e4dcf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f852a277ff1e7316e0ce8bfba6923032f119c74fe53f5208ae327118ca5bedc,PodSandboxId:b258750b706a9660565eff33570c939fe6ec61858a2b03ad7a78b405398fb3a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721179214311396892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 578e3a82bd28d3c6e10086b804f01a3e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32b5d7d9bad3a1554046a6d33479cb76c33a1bfcb4bbeedbfabf269fbe09d01,PodSandboxId:4b20b155911b3a97e20c42a55f4abc8a26b4116860cd9029cbb914d54118d3b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721179214266982285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: e850d1f77388a2c8aa82a3b26f74e356,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249cee49b59dd1b9876e6b7ebc0c455c8d488a3ed3b914652247caffa9fd71d0,PodSandboxId:34ce203c05d86b0b41823e2f068261e62f83ce355120ac1fac21d8c1e295370a,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721179214251098275,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7234cd4a1
85d3ef5ad8fb457f13137e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55fb778191054c0bb45b523b2d29a49d2b6678d325984e9d8b888d01354088c,PodSandboxId:e2bf9f96c227837a5bf305592ddda98480fb2eca2a97ab14d033fa0afdd8edac,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179210011384855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wtvw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48498faa-fb02-4d48-91e4-03369281f4fd,},Annota
tions:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47984958b96fa41b9f990958cad40a5e2bdb5f6fd52bcb610ae48839ddede51f,PodSandboxId:c587edda714b211e2141d00a61c3541bcfa9e4ad950b7ec751740da72081e55d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721179208994985640,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755be0600fe6b648cf1a1899edc4351e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a71932bab3f722e6f5867c255b3dafe8c6362cf6020b99ce0c4743c81a31d73d,PodSandboxId:821ad6dc686cda94078b7d24f44a6b697a4da62c9c2d08e1c290f337ed252690,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721179190064605942,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x64th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f909da-6f89-430a-bdd4-132e0e4dcf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3f73754e1198c1cbbe82f707a8983559ebe56e72757a4839d22ed537f9fc17,PodSandboxId:3ffcab309c3275436f28a3b1caa8aac8682924c2e56450936be89b9067034c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721179190954166283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.
kubernetes.pod.name: coredns-5cfdc65f69-rtvfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4b028e-165e-4fd0-af50-0932be779cde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229c37e05607e76a56bd9cdd66954e9144e2a481ff866a2e3f44e78a4c0b143a,PodSandboxId:d10e16d3b033099e51953d458fcc9c53e48588c171cb713d268edb0bdb522e95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721179190027965982,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2484fdd1-6708-460c-b752-71518a2a8fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:490c4b648bde1647a9c296f85e78138ab334a3bee8be625dfb7e8c7dc1da51ff,PodSandboxId:34ce203c05d86b0b41823e2f068261e62f83ce355120ac1fac21d8c1e295370a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37
af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721179189658115580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7234cd4a185d3ef5ad8fb457f13137e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec846700ed08fbf020709d28292c7cdb05c4bd5843dd76f4b74636e781cc74b,PodSandboxId:4b20b155911b3a97e20c42a55f4abc8a26b4116860cd9029cbb914d54118d3b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba08
7f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721179189708016892,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e850d1f77388a2c8aa82a3b26f74e356,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4814430a055ce45aeccb84e423a1f16db138818b1f955e0f0282af2b7379aa6e,PodSandboxId:b258750b706a9660565eff33570c939fe6ec61858a2b03ad7a78b405398fb3a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991
aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721179189571592807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578e3a82bd28d3c6e10086b804f01a3e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf5a0791427ade8d3bac95d0005e50ee466ccafe455e134289694d01480c0a3,PodSandboxId:2969f98a0db9444fe57d06681617fe060af85631b769010bf8306f85af1ebda7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018a
b909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721179176937117492,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wtvw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48498faa-fb02-4d48-91e4-03369281f4fd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d495543c156c1af0167455ceb1c6476f1f25a369251ead6dc487020ec8ad2ea2,PodSandboxId:5c319b8dab8c3146aa95d12aca052b069e6edfdf8efe26bb670af4268d353b68,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721179175627263905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755be0600fe6b648cf1a1899edc4351e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc80ded1-1eaa-4da8-8a2f-75427b0af920 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.039962320Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=bf3bbb54-512b-46c3-a17d-30979b94b284 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.040203263Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3ffcab309c3275436f28a3b1caa8aac8682924c2e56450936be89b9067034c60,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-rtvfk,Uid:6e4b028e-165e-4fd0-af50-0932be779cde,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721179189306719844,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-rtvfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4b028e-165e-4fd0-af50-0932be779cde,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T01:19:04.282173979Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e2bf9f96c227837a5bf305592ddda98480fb2eca2a97ab14d033fa0afdd8edac,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-wtvw4,Uid:48498faa-fb02-4d48-91e4-03369281f4fd,Namespac
e:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721179189263186058,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-wtvw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48498faa-fb02-4d48-91e4-03369281f4fd,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T01:19:04.250151035Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4b20b155911b3a97e20c42a55f4abc8a26b4116860cd9029cbb914d54118d3b7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-729236,Uid:e850d1f77388a2c8aa82a3b26f74e356,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721179189178404274,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e850d1f77388a2c8aa82a3b26f74e35
6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e850d1f77388a2c8aa82a3b26f74e356,kubernetes.io/config.seen: 2024-07-17T01:18:51.910634654Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b258750b706a9660565eff33570c939fe6ec61858a2b03ad7a78b405398fb3a5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-729236,Uid:578e3a82bd28d3c6e10086b804f01a3e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721179189156302524,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578e3a82bd28d3c6e10086b804f01a3e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.195:8443,kubernetes.io/config.hash: 578e3a82bd28d3c6e10086b804f01a3e,kubernetes.io/config.seen: 2024-07-17T01:18:51.910630714Z,kubernetes.io/config
.source: file,},RuntimeHandler:,},&PodSandbox{Id:821ad6dc686cda94078b7d24f44a6b697a4da62c9c2d08e1c290f337ed252690,Metadata:&PodSandboxMetadata{Name:kube-proxy-x64th,Uid:56f909da-6f89-430a-bdd4-132e0e4dcf7d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721179189147362101,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-x64th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f909da-6f89-430a-bdd4-132e0e4dcf7d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T01:19:04.358860895Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d10e16d3b033099e51953d458fcc9c53e48588c171cb713d268edb0bdb522e95,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2484fdd1-6708-460c-b752-71518a2a8fc4,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721179189126945863,Labels:map[string]string{addonmanager.kubernete
s.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2484fdd1-6708-460c-b752-71518a2a8fc4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T01:1
9:05.104218762Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34ce203c05d86b0b41823e2f068261e62f83ce355120ac1fac21d8c1e295370a,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-729236,Uid:7234cd4a185d3ef5ad8fb457f13137e1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721179189073595353,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7234cd4a185d3ef5ad8fb457f13137e1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.195:2379,kubernetes.io/config.hash: 7234cd4a185d3ef5ad8fb457f13137e1,kubernetes.io/config.seen: 2024-07-17T01:18:51.960071460Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c587edda714b211e2141d00a61c3541bcfa9e4ad950b7ec751740da72081e55d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-729236,Uid:7
55be0600fe6b648cf1a1899edc4351e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721179188971124671,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755be0600fe6b648cf1a1899edc4351e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 755be0600fe6b648cf1a1899edc4351e,kubernetes.io/config.seen: 2024-07-17T01:18:51.910635691Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2969f98a0db9444fe57d06681617fe060af85631b769010bf8306f85af1ebda7,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-wtvw4,Uid:48498faa-fb02-4d48-91e4-03369281f4fd,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1721179175716827350,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-wtvw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 48498faa-fb02-4d48-91e4-03369281f4fd,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T01:19:04.250151035Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c319b8dab8c3146aa95d12aca052b069e6edfdf8efe26bb670af4268d353b68,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-729236,Uid:755be0600fe6b648cf1a1899edc4351e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1721179175257572752,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755be0600fe6b648cf1a1899edc4351e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 755be0600fe6b648cf1a1899edc4351e,kubernetes.io/config.seen: 2024-07-17T01:18:51.910635691Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74
" id=bf3bbb54-512b-46c3-a17d-30979b94b284 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.041037639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61bdf318-e6d6-46c4-ad19-6bc349f2b4ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.041101049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61bdf318-e6d6-46c4-ad19-6bc349f2b4ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.041468388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c838cca9bd2305d8df2e156f6fe6b8c948ddd9b21dcf37536fe40254b95b659a,PodSandboxId:d10e16d3b033099e51953d458fcc9c53e48588c171cb713d268edb0bdb522e95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179218139983492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2484fdd1-6708-460c-b752-71518a2a8fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985e45595ff3111d2bedd6c02f0d5f6f1f5ef93aad7337d7ad0cf1176b8f97f9,PodSandboxId:3ffcab309c3275436f28a3b1caa8aac8682924c2e56450936be89b9067034c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179218104702259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-rtvfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4b028e-165e-4fd0-af50-0932be779cde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab9d3bd6ced02e79ef19e44fa25b950a22b45b951e9f9b793d381d373f1d59d,PodSandboxId:821ad6dc686cda94078b7d24f44a6b697a4da62c9c2d08e1c290f337ed252690,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721179218111225010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x64th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 56f909da-6f89-430a-bdd4-132e0e4dcf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f852a277ff1e7316e0ce8bfba6923032f119c74fe53f5208ae327118ca5bedc,PodSandboxId:b258750b706a9660565eff33570c939fe6ec61858a2b03ad7a78b405398fb3a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721179214311396892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 578e3a82bd28d3c6e10086b804f01a3e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32b5d7d9bad3a1554046a6d33479cb76c33a1bfcb4bbeedbfabf269fbe09d01,PodSandboxId:4b20b155911b3a97e20c42a55f4abc8a26b4116860cd9029cbb914d54118d3b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721179214266982285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: e850d1f77388a2c8aa82a3b26f74e356,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249cee49b59dd1b9876e6b7ebc0c455c8d488a3ed3b914652247caffa9fd71d0,PodSandboxId:34ce203c05d86b0b41823e2f068261e62f83ce355120ac1fac21d8c1e295370a,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721179214251098275,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7234cd4a1
85d3ef5ad8fb457f13137e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55fb778191054c0bb45b523b2d29a49d2b6678d325984e9d8b888d01354088c,PodSandboxId:e2bf9f96c227837a5bf305592ddda98480fb2eca2a97ab14d033fa0afdd8edac,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179210011384855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wtvw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48498faa-fb02-4d48-91e4-03369281f4fd,},Annota
tions:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47984958b96fa41b9f990958cad40a5e2bdb5f6fd52bcb610ae48839ddede51f,PodSandboxId:c587edda714b211e2141d00a61c3541bcfa9e4ad950b7ec751740da72081e55d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721179208994985640,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755be0600fe6b648cf1a1899edc4351e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a71932bab3f722e6f5867c255b3dafe8c6362cf6020b99ce0c4743c81a31d73d,PodSandboxId:821ad6dc686cda94078b7d24f44a6b697a4da62c9c2d08e1c290f337ed252690,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721179190064605942,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x64th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f909da-6f89-430a-bdd4-132e0e4dcf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3f73754e1198c1cbbe82f707a8983559ebe56e72757a4839d22ed537f9fc17,PodSandboxId:3ffcab309c3275436f28a3b1caa8aac8682924c2e56450936be89b9067034c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721179190954166283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.
kubernetes.pod.name: coredns-5cfdc65f69-rtvfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4b028e-165e-4fd0-af50-0932be779cde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229c37e05607e76a56bd9cdd66954e9144e2a481ff866a2e3f44e78a4c0b143a,PodSandboxId:d10e16d3b033099e51953d458fcc9c53e48588c171cb713d268edb0bdb522e95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721179190027965982,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2484fdd1-6708-460c-b752-71518a2a8fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:490c4b648bde1647a9c296f85e78138ab334a3bee8be625dfb7e8c7dc1da51ff,PodSandboxId:34ce203c05d86b0b41823e2f068261e62f83ce355120ac1fac21d8c1e295370a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37
af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721179189658115580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7234cd4a185d3ef5ad8fb457f13137e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec846700ed08fbf020709d28292c7cdb05c4bd5843dd76f4b74636e781cc74b,PodSandboxId:4b20b155911b3a97e20c42a55f4abc8a26b4116860cd9029cbb914d54118d3b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba08
7f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721179189708016892,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e850d1f77388a2c8aa82a3b26f74e356,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4814430a055ce45aeccb84e423a1f16db138818b1f955e0f0282af2b7379aa6e,PodSandboxId:b258750b706a9660565eff33570c939fe6ec61858a2b03ad7a78b405398fb3a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991
aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721179189571592807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578e3a82bd28d3c6e10086b804f01a3e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf5a0791427ade8d3bac95d0005e50ee466ccafe455e134289694d01480c0a3,PodSandboxId:2969f98a0db9444fe57d06681617fe060af85631b769010bf8306f85af1ebda7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018a
b909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721179176937117492,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wtvw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48498faa-fb02-4d48-91e4-03369281f4fd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d495543c156c1af0167455ceb1c6476f1f25a369251ead6dc487020ec8ad2ea2,PodSandboxId:5c319b8dab8c3146aa95d12aca052b069e6edfdf8efe26bb670af4268d353b68,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721179175627263905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755be0600fe6b648cf1a1899edc4351e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=61bdf318-e6d6-46c4-ad19-6bc349f2b4ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.055020471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c19fa4c-a1b6-4def-a743-72b343912dbf name=/runtime.v1.RuntimeService/Version
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.055113015Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c19fa4c-a1b6-4def-a743-72b343912dbf name=/runtime.v1.RuntimeService/Version
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.056112266Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92f48a28-b616-4da8-8b15-ac9ad28641d0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.056576507Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179222056462302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92f48a28-b616-4da8-8b15-ac9ad28641d0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.057135698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcf03c92-6870-4272-8d98-99e20856fdeb name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.057213293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcf03c92-6870-4272-8d98-99e20856fdeb name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:20:22 kubernetes-upgrade-729236 crio[3043]: time="2024-07-17 01:20:22.057560734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c838cca9bd2305d8df2e156f6fe6b8c948ddd9b21dcf37536fe40254b95b659a,PodSandboxId:d10e16d3b033099e51953d458fcc9c53e48588c171cb713d268edb0bdb522e95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179218139983492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2484fdd1-6708-460c-b752-71518a2a8fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985e45595ff3111d2bedd6c02f0d5f6f1f5ef93aad7337d7ad0cf1176b8f97f9,PodSandboxId:3ffcab309c3275436f28a3b1caa8aac8682924c2e56450936be89b9067034c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179218104702259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-rtvfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4b028e-165e-4fd0-af50-0932be779cde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab9d3bd6ced02e79ef19e44fa25b950a22b45b951e9f9b793d381d373f1d59d,PodSandboxId:821ad6dc686cda94078b7d24f44a6b697a4da62c9c2d08e1c290f337ed252690,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721179218111225010,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x64th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 56f909da-6f89-430a-bdd4-132e0e4dcf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f852a277ff1e7316e0ce8bfba6923032f119c74fe53f5208ae327118ca5bedc,PodSandboxId:b258750b706a9660565eff33570c939fe6ec61858a2b03ad7a78b405398fb3a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721179214311396892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 578e3a82bd28d3c6e10086b804f01a3e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32b5d7d9bad3a1554046a6d33479cb76c33a1bfcb4bbeedbfabf269fbe09d01,PodSandboxId:4b20b155911b3a97e20c42a55f4abc8a26b4116860cd9029cbb914d54118d3b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721179214266982285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: e850d1f77388a2c8aa82a3b26f74e356,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249cee49b59dd1b9876e6b7ebc0c455c8d488a3ed3b914652247caffa9fd71d0,PodSandboxId:34ce203c05d86b0b41823e2f068261e62f83ce355120ac1fac21d8c1e295370a,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721179214251098275,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7234cd4a1
85d3ef5ad8fb457f13137e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55fb778191054c0bb45b523b2d29a49d2b6678d325984e9d8b888d01354088c,PodSandboxId:e2bf9f96c227837a5bf305592ddda98480fb2eca2a97ab14d033fa0afdd8edac,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179210011384855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wtvw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48498faa-fb02-4d48-91e4-03369281f4fd,},Annota
tions:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47984958b96fa41b9f990958cad40a5e2bdb5f6fd52bcb610ae48839ddede51f,PodSandboxId:c587edda714b211e2141d00a61c3541bcfa9e4ad950b7ec751740da72081e55d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721179208994985640,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755be0600fe6b648cf1a1899edc4351e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a71932bab3f722e6f5867c255b3dafe8c6362cf6020b99ce0c4743c81a31d73d,PodSandboxId:821ad6dc686cda94078b7d24f44a6b697a4da62c9c2d08e1c290f337ed252690,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721179190064605942,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x64th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f909da-6f89-430a-bdd4-132e0e4dcf7d,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3f73754e1198c1cbbe82f707a8983559ebe56e72757a4839d22ed537f9fc17,PodSandboxId:3ffcab309c3275436f28a3b1caa8aac8682924c2e56450936be89b9067034c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721179190954166283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.
kubernetes.pod.name: coredns-5cfdc65f69-rtvfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4b028e-165e-4fd0-af50-0932be779cde,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229c37e05607e76a56bd9cdd66954e9144e2a481ff866a2e3f44e78a4c0b143a,PodSandboxId:d10e16d3b033099e51953d458fcc9c53e48588c171cb713d268edb0bdb522e95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721179190027965982,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2484fdd1-6708-460c-b752-71518a2a8fc4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:490c4b648bde1647a9c296f85e78138ab334a3bee8be625dfb7e8c7dc1da51ff,PodSandboxId:34ce203c05d86b0b41823e2f068261e62f83ce355120ac1fac21d8c1e295370a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37
af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721179189658115580,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7234cd4a185d3ef5ad8fb457f13137e1,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec846700ed08fbf020709d28292c7cdb05c4bd5843dd76f4b74636e781cc74b,PodSandboxId:4b20b155911b3a97e20c42a55f4abc8a26b4116860cd9029cbb914d54118d3b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba08
7f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721179189708016892,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e850d1f77388a2c8aa82a3b26f74e356,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4814430a055ce45aeccb84e423a1f16db138818b1f955e0f0282af2b7379aa6e,PodSandboxId:b258750b706a9660565eff33570c939fe6ec61858a2b03ad7a78b405398fb3a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991
aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721179189571592807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578e3a82bd28d3c6e10086b804f01a3e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf5a0791427ade8d3bac95d0005e50ee466ccafe455e134289694d01480c0a3,PodSandboxId:2969f98a0db9444fe57d06681617fe060af85631b769010bf8306f85af1ebda7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018a
b909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721179176937117492,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-wtvw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48498faa-fb02-4d48-91e4-03369281f4fd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d495543c156c1af0167455ceb1c6476f1f25a369251ead6dc487020ec8ad2ea2,PodSandboxId:5c319b8dab8c3146aa95d12aca052b069e6edfdf8efe26bb670af4268d353b68,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721179175627263905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 755be0600fe6b648cf1a1899edc4351e,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcf03c92-6870-4272-8d98-99e20856fdeb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c838cca9bd230       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       3                   d10e16d3b0330       storage-provisioner
	3ab9d3bd6ced0       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   4 seconds ago       Running             kube-proxy                3                   821ad6dc686cd       kube-proxy-x64th
	985e45595ff31       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago       Running             coredns                   3                   3ffcab309c327       coredns-5cfdc65f69-rtvfk
	0f852a277ff1e       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            3                   b258750b706a9       kube-apiserver-kubernetes-upgrade-729236
	b32b5d7d9bad3       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   3                   4b20b155911b3       kube-controller-manager-kubernetes-upgrade-729236
	249cee49b59dd       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   7 seconds ago       Running             etcd                      3                   34ce203c05d86       etcd-kubernetes-upgrade-729236
	f55fb77819105       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   12 seconds ago      Running             coredns                   2                   e2bf9f96c2278       coredns-5cfdc65f69-wtvw4
	47984958b96fa       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   13 seconds ago      Running             kube-scheduler            2                   c587edda714b2       kube-scheduler-kubernetes-upgrade-729236
	9e3f73754e119       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   31 seconds ago      Exited              coredns                   2                   3ffcab309c327       coredns-5cfdc65f69-rtvfk
	a71932bab3f72       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   32 seconds ago      Exited              kube-proxy                2                   821ad6dc686cd       kube-proxy-x64th
	229c37e05607e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   32 seconds ago      Exited              storage-provisioner       2                   d10e16d3b0330       storage-provisioner
	eec846700ed08       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   32 seconds ago      Exited              kube-controller-manager   2                   4b20b155911b3       kube-controller-manager-kubernetes-upgrade-729236
	490c4b648bde1       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   32 seconds ago      Exited              etcd                      2                   34ce203c05d86       etcd-kubernetes-upgrade-729236
	4814430a055ce       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   32 seconds ago      Exited              kube-apiserver            2                   b258750b706a9       kube-apiserver-kubernetes-upgrade-729236
	7cf5a0791427a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   45 seconds ago      Exited              coredns                   1                   2969f98a0db94       coredns-5cfdc65f69-wtvw4
	d495543c156c1       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   46 seconds ago      Exited              kube-scheduler            1                   5c319b8dab8c3       kube-scheduler-kubernetes-upgrade-729236
	
	
	==> coredns [7cf5a0791427ade8d3bac95d0005e50ee466ccafe455e134289694d01480c0a3] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [985e45595ff3111d2bedd6c02f0d5f6f1f5ef93aad7337d7ad0cf1176b8f97f9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9e3f73754e1198c1cbbe82f707a8983559ebe56e72757a4839d22ed537f9fc17] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f55fb778191054c0bb45b523b2d29a49d2b6678d325984e9d8b888d01354088c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-729236
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-729236
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:18:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-729236
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:20:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:20:17 +0000   Wed, 17 Jul 2024 01:18:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:20:17 +0000   Wed, 17 Jul 2024 01:18:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:20:17 +0000   Wed, 17 Jul 2024 01:18:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:20:17 +0000   Wed, 17 Jul 2024 01:18:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    kubernetes-upgrade-729236
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 54e5713bc6fe4b32b66ca8f92ad2425c
	  System UUID:                54e5713b-c6fe-4b32-b66c-a8f92ad2425c
	  Boot ID:                    a0a085a7-0357-4446-8de7-ae9fb95862bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-rtvfk                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     78s
	  kube-system                 coredns-5cfdc65f69-wtvw4                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     78s
	  kube-system                 etcd-kubernetes-upgrade-729236                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         81s
	  kube-system                 kube-apiserver-kubernetes-upgrade-729236             250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-729236    200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-x64th                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-kubernetes-upgrade-729236             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  90s (x8 over 91s)  kubelet          Node kubernetes-upgrade-729236 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s (x8 over 91s)  kubelet          Node kubernetes-upgrade-729236 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s (x7 over 91s)  kubelet          Node kubernetes-upgrade-729236 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           79s                node-controller  Node kubernetes-upgrade-729236 event: Registered Node kubernetes-upgrade-729236 in Controller
	  Normal  RegisteredNode           24s                node-controller  Node kubernetes-upgrade-729236 event: Registered Node kubernetes-upgrade-729236 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x9 over 9s)    kubelet          Node kubernetes-upgrade-729236 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-729236 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-729236 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-729236 event: Registered Node kubernetes-upgrade-729236 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.783065] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.078602] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070558] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.223390] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.125099] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.314704] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +4.602819] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +0.093277] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.552722] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[Jul17 01:19] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.207748] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	[ +30.931901] kauditd_printk_skb: 101 callbacks suppressed
	[  +1.345561] systemd-fstab-generator[2801]: Ignoring "noauto" option for root device
	[  +0.247803] systemd-fstab-generator[2851]: Ignoring "noauto" option for root device
	[  +0.360929] systemd-fstab-generator[2915]: Ignoring "noauto" option for root device
	[  +0.353371] systemd-fstab-generator[2962]: Ignoring "noauto" option for root device
	[  +0.511068] systemd-fstab-generator[3028]: Ignoring "noauto" option for root device
	[ +10.669513] kauditd_printk_skb: 207 callbacks suppressed
	[  +1.662521] systemd-fstab-generator[3946]: Ignoring "noauto" option for root device
	[  +3.521730] kauditd_printk_skb: 123 callbacks suppressed
	[Jul17 01:20] systemd-fstab-generator[4549]: Ignoring "noauto" option for root device
	[  +0.782084] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.629857] systemd-fstab-generator[5012]: Ignoring "noauto" option for root device
	[  +0.135411] kauditd_printk_skb: 22 callbacks suppressed
	
	
	==> etcd [249cee49b59dd1b9876e6b7ebc0c455c8d488a3ed3b914652247caffa9fd71d0] <==
	{"level":"info","ts":"2024-07-17T01:20:14.537632Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e260bcd32c6c8b35","local-member-id":"324857e3fe6e5c62","added-peer-id":"324857e3fe6e5c62","added-peer-peer-urls":["https://192.168.39.195:2380"]}
	{"level":"info","ts":"2024-07-17T01:20:14.53776Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e260bcd32c6c8b35","local-member-id":"324857e3fe6e5c62","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:20:14.53781Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:20:14.536783Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T01:20:14.540093Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:20:14.540307Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"324857e3fe6e5c62","initial-advertise-peer-urls":["https://192.168.39.195:2380"],"listen-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.195:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:20:14.540347Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:20:14.540393Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-07-17T01:20:14.540416Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-07-17T01:20:15.700884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 is starting a new election at term 4"}
	{"level":"info","ts":"2024-07-17T01:20:15.701016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-07-17T01:20:15.701062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 received MsgPreVoteResp from 324857e3fe6e5c62 at term 4"}
	{"level":"info","ts":"2024-07-17T01:20:15.701092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 became candidate at term 5"}
	{"level":"info","ts":"2024-07-17T01:20:15.701117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 received MsgVoteResp from 324857e3fe6e5c62 at term 5"}
	{"level":"info","ts":"2024-07-17T01:20:15.70115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 became leader at term 5"}
	{"level":"info","ts":"2024-07-17T01:20:15.701176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 324857e3fe6e5c62 elected leader 324857e3fe6e5c62 at term 5"}
	{"level":"info","ts":"2024-07-17T01:20:15.707092Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"324857e3fe6e5c62","local-member-attributes":"{Name:kubernetes-upgrade-729236 ClientURLs:[https://192.168.39.195:2379]}","request-path":"/0/members/324857e3fe6e5c62/attributes","cluster-id":"e260bcd32c6c8b35","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:20:15.707301Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:20:15.707407Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:20:15.707694Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:20:15.707729Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:20:15.708906Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T01:20:15.709189Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T01:20:15.71018Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.195:2379"}
	{"level":"info","ts":"2024-07-17T01:20:15.711232Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [490c4b648bde1647a9c296f85e78138ab334a3bee8be625dfb7e8c7dc1da51ff] <==
	{"level":"info","ts":"2024-07-17T01:19:51.588416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-17T01:19:51.588445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 received MsgPreVoteResp from 324857e3fe6e5c62 at term 3"}
	{"level":"info","ts":"2024-07-17T01:19:51.588459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 became candidate at term 4"}
	{"level":"info","ts":"2024-07-17T01:19:51.588465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 received MsgVoteResp from 324857e3fe6e5c62 at term 4"}
	{"level":"info","ts":"2024-07-17T01:19:51.588517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 became leader at term 4"}
	{"level":"info","ts":"2024-07-17T01:19:51.588527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 324857e3fe6e5c62 elected leader 324857e3fe6e5c62 at term 4"}
	{"level":"info","ts":"2024-07-17T01:19:51.591852Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"324857e3fe6e5c62","local-member-attributes":"{Name:kubernetes-upgrade-729236 ClientURLs:[https://192.168.39.195:2379]}","request-path":"/0/members/324857e3fe6e5c62/attributes","cluster-id":"e260bcd32c6c8b35","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:19:51.591899Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:19:51.59557Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:19:51.596301Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T01:19:51.601561Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.195:2379"}
	{"level":"info","ts":"2024-07-17T01:19:51.602365Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T01:19:51.609462Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:19:51.610889Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:19:51.612273Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:20:01.787663Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-17T01:20:01.787783Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-729236","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	{"level":"warn","ts":"2024-07-17T01:20:01.787863Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T01:20:01.788019Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T01:20:01.818447Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T01:20:01.81855Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T01:20:01.820046Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"324857e3fe6e5c62","current-leader-member-id":"324857e3fe6e5c62"}
	{"level":"info","ts":"2024-07-17T01:20:01.822905Z","caller":"embed/etcd.go:580","msg":"stopping serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-07-17T01:20:01.823037Z","caller":"embed/etcd.go:585","msg":"stopped serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-07-17T01:20:01.823085Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-729236","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	
	
	==> kernel <==
	 01:20:22 up 1 min,  0 users,  load average: 1.86, 0.65, 0.23
	Linux kubernetes-upgrade-729236 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0f852a277ff1e7316e0ce8bfba6923032f119c74fe53f5208ae327118ca5bedc] <==
	I0717 01:20:17.140876       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0717 01:20:17.141019       1 aggregator.go:171] initial CRD sync complete...
	I0717 01:20:17.141043       1 autoregister_controller.go:144] Starting autoregister controller
	I0717 01:20:17.141065       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 01:20:17.141409       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:20:17.155464       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 01:20:17.157587       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 01:20:17.166270       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:20:17.166343       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 01:20:17.166351       1 policy_source.go:224] refreshing policies
	I0717 01:20:17.166398       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0717 01:20:17.239917       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:20:17.248252       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 01:20:17.248665       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0717 01:20:17.248736       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0717 01:20:17.254339       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0717 01:20:17.254897       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 01:20:18.055059       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:20:19.658822       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:20:19.679143       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:20:19.744426       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:20:19.801628       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:20:19.810302       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:20:21.458958       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 01:20:21.617363       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [4814430a055ce45aeccb84e423a1f16db138818b1f955e0f0282af2b7379aa6e] <==
	W0717 01:20:11.157006       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.157121       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.170063       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.183984       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.225093       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.282064       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.329465       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.340217       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.361788       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.397983       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.450463       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.501903       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.531774       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.545350       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.546733       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.550072       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.565741       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.591322       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.709218       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.715694       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.739774       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.753343       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.789401       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.816800       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:20:11.884811       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b32b5d7d9bad3a1554046a6d33479cb76c33a1bfcb4bbeedbfabf269fbe09d01] <==
	I0717 01:20:21.558542       1 shared_informer.go:320] Caches are synced for PV protection
	I0717 01:20:21.572191       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"kubernetes-upgrade-729236\" does not exist"
	I0717 01:20:21.572338       1 shared_informer.go:320] Caches are synced for taint
	I0717 01:20:21.572442       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0717 01:20:21.572663       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-729236"
	I0717 01:20:21.572736       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0717 01:20:21.595929       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 01:20:21.596199       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-729236"
	I0717 01:20:21.600934       1 shared_informer.go:320] Caches are synced for GC
	I0717 01:20:21.619425       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:20:21.639721       1 shared_informer.go:320] Caches are synced for TTL
	I0717 01:20:21.642867       1 shared_informer.go:320] Caches are synced for node
	I0717 01:20:21.643010       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0717 01:20:21.643083       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0717 01:20:21.643110       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0717 01:20:21.643119       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0717 01:20:21.643223       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-729236"
	I0717 01:20:21.646231       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:20:21.655367       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 01:20:21.657583       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:20:21.658315       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 01:20:21.660024       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0717 01:20:21.665768       1 shared_informer.go:320] Caches are synced for attach detach
	I0717 01:20:21.674350       1 shared_informer.go:320] Caches are synced for daemon sets
	I0717 01:20:21.706249       1 shared_informer.go:320] Caches are synced for resource quota
	
	
	==> kube-controller-manager [eec846700ed08fbf020709d28292c7cdb05c4bd5843dd76f4b74636e781cc74b] <==
	I0717 01:19:58.054138       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 01:19:58.054213       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-729236"
	I0717 01:19:58.054147       1 shared_informer.go:320] Caches are synced for job
	I0717 01:19:58.060443       1 shared_informer.go:320] Caches are synced for attach detach
	I0717 01:19:58.067903       1 shared_informer.go:320] Caches are synced for disruption
	I0717 01:19:58.071696       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 01:19:58.078734       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0717 01:19:58.078868       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="73.705µs"
	I0717 01:19:58.079969       1 shared_informer.go:320] Caches are synced for stateful set
	I0717 01:19:58.083228       1 shared_informer.go:320] Caches are synced for taint
	I0717 01:19:58.083363       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0717 01:19:58.083435       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-729236"
	I0717 01:19:58.083537       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0717 01:19:58.086068       1 shared_informer.go:320] Caches are synced for ephemeral
	I0717 01:19:58.102018       1 shared_informer.go:320] Caches are synced for PVC protection
	I0717 01:19:58.102062       1 shared_informer.go:320] Caches are synced for HPA
	I0717 01:19:58.104422       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0717 01:19:58.105787       1 shared_informer.go:320] Caches are synced for endpoint
	I0717 01:19:58.126794       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0717 01:19:58.136190       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:19:58.166812       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:19:58.185949       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:19:58.204412       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:19:58.204450       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 01:20:01.435367       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="190.138µs"
	
	
	==> kube-proxy [3ab9d3bd6ced02e79ef19e44fa25b950a22b45b951e9f9b793d381d373f1d59d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 01:20:18.554680       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0717 01:20:18.582570       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	E0717 01:20:18.582653       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0717 01:20:18.653420       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0717 01:20:18.653584       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:20:18.653624       1 server_linux.go:170] "Using iptables Proxier"
	I0717 01:20:18.656737       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0717 01:20:18.657007       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0717 01:20:18.657051       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:20:18.659675       1 config.go:197] "Starting service config controller"
	I0717 01:20:18.659714       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:20:18.659734       1 config.go:104] "Starting endpoint slice config controller"
	I0717 01:20:18.659738       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:20:18.660292       1 config.go:326] "Starting node config controller"
	I0717 01:20:18.660328       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:20:18.760807       1 shared_informer.go:320] Caches are synced for node config
	I0717 01:20:18.760851       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:20:18.760905       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a71932bab3f722e6f5867c255b3dafe8c6362cf6020b99ce0c4743c81a31d73d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 01:19:52.422727       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0717 01:19:53.681800       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	E0717 01:19:53.682140       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0717 01:19:53.768251       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0717 01:19:53.768331       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:19:53.768362       1 server_linux.go:170] "Using iptables Proxier"
	I0717 01:19:53.776286       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0717 01:19:53.777217       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0717 01:19:53.777359       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:19:53.779260       1 config.go:104] "Starting endpoint slice config controller"
	I0717 01:19:53.779293       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:19:53.779341       1 config.go:197] "Starting service config controller"
	I0717 01:19:53.779345       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:19:53.790597       1 config.go:326] "Starting node config controller"
	I0717 01:19:53.790625       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:19:53.879981       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:19:53.880059       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:19:53.891300       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [47984958b96fa41b9f990958cad40a5e2bdb5f6fd52bcb610ae48839ddede51f] <==
	W0717 01:20:17.125871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 01:20:17.125902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 01:20:17.125965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 01:20:17.125995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 01:20:17.126053       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 01:20:17.126086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 01:20:17.126147       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 01:20:17.126179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 01:20:17.128608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 01:20:17.128676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 01:20:17.128748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 01:20:17.128778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 01:20:17.128841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 01:20:17.128880       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 01:20:17.128951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 01:20:17.128981       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0717 01:20:17.129043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 01:20:17.129087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 01:20:17.129163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 01:20:17.129194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 01:20:17.154923       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 01:20:17.155083       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0717 01:20:17.174129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 01:20:17.174235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0717 01:20:22.977575       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d495543c156c1af0167455ceb1c6476f1f25a369251ead6dc487020ec8ad2ea2] <==
	
	
	==> kubelet <==
	Jul 17 01:20:13 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:13.988891    4556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e850d1f77388a2c8aa82a3b26f74e356-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-729236\" (UID: \"e850d1f77388a2c8aa82a3b26f74e356\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-729236"
	Jul 17 01:20:13 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:13.988906    4556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7234cd4a185d3ef5ad8fb457f13137e1-etcd-certs\") pod \"etcd-kubernetes-upgrade-729236\" (UID: \"7234cd4a185d3ef5ad8fb457f13137e1\") " pod="kube-system/etcd-kubernetes-upgrade-729236"
	Jul 17 01:20:13 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:13.988920    4556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/578e3a82bd28d3c6e10086b804f01a3e-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-729236\" (UID: \"578e3a82bd28d3c6e10086b804f01a3e\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-729236"
	Jul 17 01:20:13 kubernetes-upgrade-729236 kubelet[4556]: E0717 01:20:13.989553    4556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-729236?timeout=10s\": dial tcp 192.168.39.195:8443: connect: connection refused" interval="400ms"
	Jul 17 01:20:14 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:14.089682    4556 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-729236"
	Jul 17 01:20:14 kubernetes-upgrade-729236 kubelet[4556]: E0717 01:20:14.091730    4556 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.195:8443: connect: connection refused" node="kubernetes-upgrade-729236"
	Jul 17 01:20:14 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:14.235750    4556 scope.go:117] "RemoveContainer" containerID="490c4b648bde1647a9c296f85e78138ab334a3bee8be625dfb7e8c7dc1da51ff"
	Jul 17 01:20:14 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:14.237839    4556 scope.go:117] "RemoveContainer" containerID="eec846700ed08fbf020709d28292c7cdb05c4bd5843dd76f4b74636e781cc74b"
	Jul 17 01:20:14 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:14.239707    4556 scope.go:117] "RemoveContainer" containerID="4814430a055ce45aeccb84e423a1f16db138818b1f955e0f0282af2b7379aa6e"
	Jul 17 01:20:14 kubernetes-upgrade-729236 kubelet[4556]: E0717 01:20:14.391394    4556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-729236?timeout=10s\": dial tcp 192.168.39.195:8443: connect: connection refused" interval="800ms"
	Jul 17 01:20:14 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:14.493785    4556 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-729236"
	Jul 17 01:20:14 kubernetes-upgrade-729236 kubelet[4556]: E0717 01:20:14.494638    4556 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.195:8443: connect: connection refused" node="kubernetes-upgrade-729236"
	Jul 17 01:20:15 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:15.296901    4556 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-729236"
	Jul 17 01:20:17 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:17.217301    4556 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-729236"
	Jul 17 01:20:17 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:17.217794    4556 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-729236"
	Jul 17 01:20:17 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:17.217882    4556 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 01:20:17 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:17.218989    4556 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 01:20:17 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:17.763355    4556 apiserver.go:52] "Watching apiserver"
	Jul 17 01:20:17 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:17.786044    4556 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 17 01:20:17 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:17.842685    4556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2484fdd1-6708-460c-b752-71518a2a8fc4-tmp\") pod \"storage-provisioner\" (UID: \"2484fdd1-6708-460c-b752-71518a2a8fc4\") " pod="kube-system/storage-provisioner"
	Jul 17 01:20:17 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:17.843154    4556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56f909da-6f89-430a-bdd4-132e0e4dcf7d-lib-modules\") pod \"kube-proxy-x64th\" (UID: \"56f909da-6f89-430a-bdd4-132e0e4dcf7d\") " pod="kube-system/kube-proxy-x64th"
	Jul 17 01:20:17 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:17.843798    4556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56f909da-6f89-430a-bdd4-132e0e4dcf7d-xtables-lock\") pod \"kube-proxy-x64th\" (UID: \"56f909da-6f89-430a-bdd4-132e0e4dcf7d\") " pod="kube-system/kube-proxy-x64th"
	Jul 17 01:20:18 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:18.073834    4556 scope.go:117] "RemoveContainer" containerID="a71932bab3f722e6f5867c255b3dafe8c6362cf6020b99ce0c4743c81a31d73d"
	Jul 17 01:20:18 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:18.078953    4556 scope.go:117] "RemoveContainer" containerID="9e3f73754e1198c1cbbe82f707a8983559ebe56e72757a4839d22ed537f9fc17"
	Jul 17 01:20:18 kubernetes-upgrade-729236 kubelet[4556]: I0717 01:20:18.082240    4556 scope.go:117] "RemoveContainer" containerID="229c37e05607e76a56bd9cdd66954e9144e2a481ff866a2e3f44e78a4c0b143a"
	
	
	==> storage-provisioner [229c37e05607e76a56bd9cdd66954e9144e2a481ff866a2e3f44e78a4c0b143a] <==
	I0717 01:19:50.687002       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:19:53.675969       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:19:53.677294       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:19:53.702448       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:19:53.702730       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-729236_826538fe-2f80-4ceb-b90c-b7dfb5e5b8d2!
	I0717 01:19:53.703278       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9840dd0e-2b1c-4ede-9812-5e5585853c56", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-729236_826538fe-2f80-4ceb-b90c-b7dfb5e5b8d2 became leader
	I0717 01:19:53.803225       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-729236_826538fe-2f80-4ceb-b90c-b7dfb5e5b8d2!
	
	
	==> storage-provisioner [c838cca9bd2305d8df2e156f6fe6b8c948ddd9b21dcf37536fe40254b95b659a] <==
	I0717 01:20:18.448230       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:20:18.495042       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:20:18.495281       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
** stderr ** 
	E0717 01:20:21.486854   64227 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19265-12897/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
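The `bufio.Scanner: token too long` error in the stderr block above means a single line in lastStart.txt exceeded bufio.Scanner's default 64 KiB token limit, so the post-mortem could not include the last-start logs. As a minimal sketch only (this is not minikube's logs.go implementation; readLongLines and the literal path are illustrative), a scanner given a larger buffer reads such oversized lines instead of failing:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// readLongLines reads a file line by line, raising bufio.Scanner's default
	// 64 KiB token limit so a very long line is returned instead of aborting
	// with bufio.ErrTooLong ("token too long").
	func readLongLines(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Start from the default-sized buffer but allow tokens up to 10 MiB.
		sc.Buffer(make([]byte, 0, bufio.MaxScanTokenSize), 10*1024*1024)

		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		return lines, sc.Err()
	}

	func main() {
		// Path is illustrative; substitute the lastStart.txt location shown in the log above.
		lines, err := readLongLines("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, "read failed:", err)
			os.Exit(1)
		}
		fmt.Printf("read %d lines\n", len(lines))
	}

An alternative with the same effect would be reading the file with bufio.Reader.ReadString('\n'), which grows its buffer as needed rather than enforcing a fixed token limit.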
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-729236 -n kubernetes-upgrade-729236
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-729236 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-729236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-729236
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-729236: (1.375481254s)
--- FAIL: TestKubernetesUpgrade (414.74s)

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (318.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-249342 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-249342 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m18.010520678s)

-- stdout --
	* [old-k8s-version-249342] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-249342" primary control-plane node in "old-k8s-version-249342" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0717 01:13:31.856766   55687 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:13:31.856850   55687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:13:31.856870   55687 out.go:304] Setting ErrFile to fd 2...
	I0717 01:13:31.856879   55687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:13:31.857115   55687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:13:31.857654   55687 out.go:298] Setting JSON to false
	I0717 01:13:31.858467   55687 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6961,"bootTime":1721171851,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:13:31.858544   55687 start.go:139] virtualization: kvm guest
	I0717 01:13:31.861580   55687 out.go:177] * [old-k8s-version-249342] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:13:31.863239   55687 notify.go:220] Checking for updates...
	I0717 01:13:31.864665   55687 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:13:31.866816   55687 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:13:31.869106   55687 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:13:31.871171   55687 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:13:31.873512   55687 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:13:31.876958   55687 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:13:31.878361   55687 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:13:31.918966   55687 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 01:13:31.920115   55687 start.go:297] selected driver: kvm2
	I0717 01:13:31.920129   55687 start.go:901] validating driver "kvm2" against <nil>
	I0717 01:13:31.920142   55687 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:13:31.920849   55687 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:13:31.947538   55687 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:13:31.965354   55687 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:13:31.965406   55687 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 01:13:31.965686   55687 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:13:31.965728   55687 cni.go:84] Creating CNI manager for ""
	I0717 01:13:31.965739   55687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:13:31.965751   55687 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 01:13:31.965942   55687 start.go:340] cluster config:
	{Name:old-k8s-version-249342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-249342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:13:31.966086   55687 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:13:31.967827   55687 out.go:177] * Starting "old-k8s-version-249342" primary control-plane node in "old-k8s-version-249342" cluster
	I0717 01:13:31.968949   55687 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:13:31.968991   55687 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:13:31.969004   55687 cache.go:56] Caching tarball of preloaded images
	I0717 01:13:31.969089   55687 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:13:31.969102   55687 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 01:13:31.969471   55687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/config.json ...
	I0717 01:13:31.969498   55687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/config.json: {Name:mkab53322a0ec8a24a4992198fdf82ca1141cd43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:13:31.969644   55687 start.go:360] acquireMachinesLock for old-k8s-version-249342: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:14:20.333452   55687 start.go:364] duration metric: took 48.363766516s to acquireMachinesLock for "old-k8s-version-249342"
	I0717 01:14:20.333520   55687 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-249342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-249342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:14:20.333644   55687 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 01:14:20.335425   55687 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 01:14:20.335622   55687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:14:20.335677   55687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:14:20.351952   55687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0717 01:14:20.352363   55687 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:14:20.352960   55687 main.go:141] libmachine: Using API Version  1
	I0717 01:14:20.352983   55687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:14:20.353403   55687 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:14:20.353609   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetMachineName
	I0717 01:14:20.353783   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:14:20.353954   55687 start.go:159] libmachine.API.Create for "old-k8s-version-249342" (driver="kvm2")
	I0717 01:14:20.353981   55687 client.go:168] LocalClient.Create starting
	I0717 01:14:20.354020   55687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 01:14:20.354056   55687 main.go:141] libmachine: Decoding PEM data...
	I0717 01:14:20.354077   55687 main.go:141] libmachine: Parsing certificate...
	I0717 01:14:20.354141   55687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 01:14:20.354166   55687 main.go:141] libmachine: Decoding PEM data...
	I0717 01:14:20.354188   55687 main.go:141] libmachine: Parsing certificate...
	I0717 01:14:20.354214   55687 main.go:141] libmachine: Running pre-create checks...
	I0717 01:14:20.354227   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .PreCreateCheck
	I0717 01:14:20.354610   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetConfigRaw
	I0717 01:14:20.354989   55687 main.go:141] libmachine: Creating machine...
	I0717 01:14:20.355004   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .Create
	I0717 01:14:20.355146   55687 main.go:141] libmachine: (old-k8s-version-249342) Creating KVM machine...
	I0717 01:14:20.356417   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found existing default KVM network
	I0717 01:14:20.357603   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:20.357435   58445 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b2:f4:2a} reservation:<nil>}
	I0717 01:14:20.358485   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:20.358399   58445 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:dd:40:b1} reservation:<nil>}
	I0717 01:14:20.359366   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:20.359290   58445 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028ec70}
	I0717 01:14:20.359394   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | created network xml: 
	I0717 01:14:20.359412   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | <network>
	I0717 01:14:20.359425   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG |   <name>mk-old-k8s-version-249342</name>
	I0717 01:14:20.359440   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG |   <dns enable='no'/>
	I0717 01:14:20.359712   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG |   
	I0717 01:14:20.359735   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0717 01:14:20.359746   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG |     <dhcp>
	I0717 01:14:20.359763   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0717 01:14:20.359773   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG |     </dhcp>
	I0717 01:14:20.359784   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG |   </ip>
	I0717 01:14:20.359793   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG |   
	I0717 01:14:20.359804   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | </network>
	I0717 01:14:20.359816   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | 
	I0717 01:14:20.364536   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | trying to create private KVM network mk-old-k8s-version-249342 192.168.61.0/24...
	I0717 01:14:20.432165   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | private KVM network mk-old-k8s-version-249342 192.168.61.0/24 created
	I0717 01:14:20.432200   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:20.432130   58445 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:14:20.432216   55687 main.go:141] libmachine: (old-k8s-version-249342) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342 ...
	I0717 01:14:20.432234   55687 main.go:141] libmachine: (old-k8s-version-249342) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 01:14:20.432373   55687 main.go:141] libmachine: (old-k8s-version-249342) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 01:14:20.681979   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:20.681861   58445 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa...
	I0717 01:14:20.992922   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:20.992830   58445 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/old-k8s-version-249342.rawdisk...
	I0717 01:14:20.992948   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Writing magic tar header
	I0717 01:14:20.992960   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Writing SSH key tar header
	I0717 01:14:20.993066   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:20.992975   58445 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342 ...
	I0717 01:14:20.993117   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342
	I0717 01:14:20.993141   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 01:14:20.993153   55687 main.go:141] libmachine: (old-k8s-version-249342) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342 (perms=drwx------)
	I0717 01:14:20.993521   55687 main.go:141] libmachine: (old-k8s-version-249342) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 01:14:20.993551   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:14:20.993562   55687 main.go:141] libmachine: (old-k8s-version-249342) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 01:14:20.993582   55687 main.go:141] libmachine: (old-k8s-version-249342) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 01:14:20.993598   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 01:14:20.993812   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 01:14:20.993876   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Checking permissions on dir: /home/jenkins
	I0717 01:14:20.993890   55687 main.go:141] libmachine: (old-k8s-version-249342) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 01:14:20.993908   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Checking permissions on dir: /home
	I0717 01:14:20.993943   55687 main.go:141] libmachine: (old-k8s-version-249342) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 01:14:20.993984   55687 main.go:141] libmachine: (old-k8s-version-249342) Creating domain...
	I0717 01:14:20.993996   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Skipping /home - not owner
	I0717 01:14:20.995011   55687 main.go:141] libmachine: (old-k8s-version-249342) define libvirt domain using xml: 
	I0717 01:14:20.995032   55687 main.go:141] libmachine: (old-k8s-version-249342) <domain type='kvm'>
	I0717 01:14:20.995044   55687 main.go:141] libmachine: (old-k8s-version-249342)   <name>old-k8s-version-249342</name>
	I0717 01:14:20.995054   55687 main.go:141] libmachine: (old-k8s-version-249342)   <memory unit='MiB'>2200</memory>
	I0717 01:14:20.995064   55687 main.go:141] libmachine: (old-k8s-version-249342)   <vcpu>2</vcpu>
	I0717 01:14:20.995071   55687 main.go:141] libmachine: (old-k8s-version-249342)   <features>
	I0717 01:14:20.995202   55687 main.go:141] libmachine: (old-k8s-version-249342)     <acpi/>
	I0717 01:14:20.995221   55687 main.go:141] libmachine: (old-k8s-version-249342)     <apic/>
	I0717 01:14:20.995232   55687 main.go:141] libmachine: (old-k8s-version-249342)     <pae/>
	I0717 01:14:20.995244   55687 main.go:141] libmachine: (old-k8s-version-249342)     
	I0717 01:14:20.995254   55687 main.go:141] libmachine: (old-k8s-version-249342)   </features>
	I0717 01:14:20.995281   55687 main.go:141] libmachine: (old-k8s-version-249342)   <cpu mode='host-passthrough'>
	I0717 01:14:20.995301   55687 main.go:141] libmachine: (old-k8s-version-249342)   
	I0717 01:14:20.995311   55687 main.go:141] libmachine: (old-k8s-version-249342)   </cpu>
	I0717 01:14:20.995320   55687 main.go:141] libmachine: (old-k8s-version-249342)   <os>
	I0717 01:14:20.995338   55687 main.go:141] libmachine: (old-k8s-version-249342)     <type>hvm</type>
	I0717 01:14:20.995348   55687 main.go:141] libmachine: (old-k8s-version-249342)     <boot dev='cdrom'/>
	I0717 01:14:20.995360   55687 main.go:141] libmachine: (old-k8s-version-249342)     <boot dev='hd'/>
	I0717 01:14:20.995371   55687 main.go:141] libmachine: (old-k8s-version-249342)     <bootmenu enable='no'/>
	I0717 01:14:20.995379   55687 main.go:141] libmachine: (old-k8s-version-249342)   </os>
	I0717 01:14:20.995392   55687 main.go:141] libmachine: (old-k8s-version-249342)   <devices>
	I0717 01:14:20.995419   55687 main.go:141] libmachine: (old-k8s-version-249342)     <disk type='file' device='cdrom'>
	I0717 01:14:20.995448   55687 main.go:141] libmachine: (old-k8s-version-249342)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/boot2docker.iso'/>
	I0717 01:14:20.995472   55687 main.go:141] libmachine: (old-k8s-version-249342)       <target dev='hdc' bus='scsi'/>
	I0717 01:14:20.995484   55687 main.go:141] libmachine: (old-k8s-version-249342)       <readonly/>
	I0717 01:14:20.995495   55687 main.go:141] libmachine: (old-k8s-version-249342)     </disk>
	I0717 01:14:20.995518   55687 main.go:141] libmachine: (old-k8s-version-249342)     <disk type='file' device='disk'>
	I0717 01:14:20.995535   55687 main.go:141] libmachine: (old-k8s-version-249342)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 01:14:20.995559   55687 main.go:141] libmachine: (old-k8s-version-249342)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/old-k8s-version-249342.rawdisk'/>
	I0717 01:14:20.995573   55687 main.go:141] libmachine: (old-k8s-version-249342)       <target dev='hda' bus='virtio'/>
	I0717 01:14:20.995585   55687 main.go:141] libmachine: (old-k8s-version-249342)     </disk>
	I0717 01:14:20.995596   55687 main.go:141] libmachine: (old-k8s-version-249342)     <interface type='network'>
	I0717 01:14:20.995610   55687 main.go:141] libmachine: (old-k8s-version-249342)       <source network='mk-old-k8s-version-249342'/>
	I0717 01:14:20.995623   55687 main.go:141] libmachine: (old-k8s-version-249342)       <model type='virtio'/>
	I0717 01:14:20.995640   55687 main.go:141] libmachine: (old-k8s-version-249342)     </interface>
	I0717 01:14:20.995654   55687 main.go:141] libmachine: (old-k8s-version-249342)     <interface type='network'>
	I0717 01:14:20.995667   55687 main.go:141] libmachine: (old-k8s-version-249342)       <source network='default'/>
	I0717 01:14:20.995679   55687 main.go:141] libmachine: (old-k8s-version-249342)       <model type='virtio'/>
	I0717 01:14:20.995690   55687 main.go:141] libmachine: (old-k8s-version-249342)     </interface>
	I0717 01:14:20.995705   55687 main.go:141] libmachine: (old-k8s-version-249342)     <serial type='pty'>
	I0717 01:14:20.995723   55687 main.go:141] libmachine: (old-k8s-version-249342)       <target port='0'/>
	I0717 01:14:20.995736   55687 main.go:141] libmachine: (old-k8s-version-249342)     </serial>
	I0717 01:14:20.995748   55687 main.go:141] libmachine: (old-k8s-version-249342)     <console type='pty'>
	I0717 01:14:20.995762   55687 main.go:141] libmachine: (old-k8s-version-249342)       <target type='serial' port='0'/>
	I0717 01:14:20.995774   55687 main.go:141] libmachine: (old-k8s-version-249342)     </console>
	I0717 01:14:20.995787   55687 main.go:141] libmachine: (old-k8s-version-249342)     <rng model='virtio'>
	I0717 01:14:20.995805   55687 main.go:141] libmachine: (old-k8s-version-249342)       <backend model='random'>/dev/random</backend>
	I0717 01:14:20.995823   55687 main.go:141] libmachine: (old-k8s-version-249342)     </rng>
	I0717 01:14:20.995834   55687 main.go:141] libmachine: (old-k8s-version-249342)     
	I0717 01:14:20.995844   55687 main.go:141] libmachine: (old-k8s-version-249342)     
	I0717 01:14:20.995856   55687 main.go:141] libmachine: (old-k8s-version-249342)   </devices>
	I0717 01:14:20.995868   55687 main.go:141] libmachine: (old-k8s-version-249342) </domain>
	I0717 01:14:20.995885   55687 main.go:141] libmachine: (old-k8s-version-249342) 
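	The XML dumped above is the complete libvirt definition the kvm2 driver submits. As a rough manual equivalent (the driver itself talks to the libvirt API rather than the CLI, and net.xml/domain.xml are placeholder file names for the two XML blocks shown in this log), the same objects could be created with virsh:
	
		virsh --connect qemu:///system net-define net.xml        # the <network> block logged earlier
		virsh --connect qemu:///system net-start mk-old-k8s-version-249342
		virsh --connect qemu:///system define domain.xml         # the <domain> block above
		virsh --connect qemu:///system start old-k8s-version-249342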
	I0717 01:14:21.002375   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:df:b8:fc in network default
	I0717 01:14:21.002981   55687 main.go:141] libmachine: (old-k8s-version-249342) Ensuring networks are active...
	I0717 01:14:21.002997   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:21.003746   55687 main.go:141] libmachine: (old-k8s-version-249342) Ensuring network default is active
	I0717 01:14:21.004097   55687 main.go:141] libmachine: (old-k8s-version-249342) Ensuring network mk-old-k8s-version-249342 is active
	I0717 01:14:21.004653   55687 main.go:141] libmachine: (old-k8s-version-249342) Getting domain xml...
	I0717 01:14:21.005432   55687 main.go:141] libmachine: (old-k8s-version-249342) Creating domain...
	I0717 01:14:22.290594   55687 main.go:141] libmachine: (old-k8s-version-249342) Waiting to get IP...
	I0717 01:14:22.291489   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:22.291906   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:22.291954   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:22.291884   58445 retry.go:31] will retry after 304.227969ms: waiting for machine to come up
	I0717 01:14:22.598358   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:22.598912   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:22.598936   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:22.598869   58445 retry.go:31] will retry after 346.643451ms: waiting for machine to come up
	I0717 01:14:22.947506   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:22.947996   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:22.948021   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:22.947927   58445 retry.go:31] will retry after 459.147578ms: waiting for machine to come up
	I0717 01:14:23.408605   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:23.409176   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:23.409203   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:23.409147   58445 retry.go:31] will retry after 570.096886ms: waiting for machine to come up
	I0717 01:14:23.980249   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:23.980798   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:23.980822   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:23.980768   58445 retry.go:31] will retry after 643.263218ms: waiting for machine to come up
	I0717 01:14:24.625732   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:24.626414   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:24.626439   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:24.626378   58445 retry.go:31] will retry after 709.097588ms: waiting for machine to come up
	I0717 01:14:25.337450   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:25.337880   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:25.337910   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:25.337830   58445 retry.go:31] will retry after 777.716698ms: waiting for machine to come up
	I0717 01:14:26.116980   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:26.117361   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:26.117393   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:26.117311   58445 retry.go:31] will retry after 1.042626118s: waiting for machine to come up
	I0717 01:14:27.161148   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:27.161599   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:27.161631   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:27.161554   58445 retry.go:31] will retry after 1.326117277s: waiting for machine to come up
	I0717 01:14:28.489933   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:28.490460   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:28.490487   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:28.490415   58445 retry.go:31] will retry after 2.257564527s: waiting for machine to come up
	I0717 01:14:30.749563   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:30.750079   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:30.750107   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:30.750045   58445 retry.go:31] will retry after 1.939918369s: waiting for machine to come up
	I0717 01:14:32.692261   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:32.692649   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:32.692688   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:32.692601   58445 retry.go:31] will retry after 2.861672699s: waiting for machine to come up
	I0717 01:14:35.556129   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:35.556484   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:35.556506   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:35.556432   58445 retry.go:31] will retry after 2.875372571s: waiting for machine to come up
	I0717 01:14:38.435321   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:38.435736   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:14:38.435765   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:14:38.435694   58445 retry.go:31] will retry after 4.687464094s: waiting for machine to come up
	I0717 01:14:43.125562   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.126065   55687 main.go:141] libmachine: (old-k8s-version-249342) Found IP for machine: 192.168.61.13
	I0717 01:14:43.126111   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has current primary IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.126131   55687 main.go:141] libmachine: (old-k8s-version-249342) Reserving static IP address...
	I0717 01:14:43.126498   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-249342", mac: "52:54:00:f3:5b:b9", ip: "192.168.61.13"} in network mk-old-k8s-version-249342
	I0717 01:14:43.195806   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Getting to WaitForSSH function...
	I0717 01:14:43.195831   55687 main.go:141] libmachine: (old-k8s-version-249342) Reserved static IP address: 192.168.61.13
	I0717 01:14:43.195845   55687 main.go:141] libmachine: (old-k8s-version-249342) Waiting for SSH to be available...
	I0717 01:14:43.198471   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.198892   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:43.198923   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.199175   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Using SSH client type: external
	I0717 01:14:43.199203   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa (-rw-------)
	I0717 01:14:43.199256   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:14:43.199270   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | About to run SSH command:
	I0717 01:14:43.199282   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | exit 0
	I0717 01:14:43.320542   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | SSH cmd err, output: <nil>: 
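	The probe above is a plain ssh invocation built from the options logged; reproduced by hand it looks like the sketch below (key path and address copied verbatim from the log, and only valid while this VM exists):
	
		ssh -F /dev/null -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
		  -o IdentitiesOnly=yes -p 22 \
		  -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa \
		  docker@192.168.61.13 'exit 0'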
	I0717 01:14:43.320827   55687 main.go:141] libmachine: (old-k8s-version-249342) KVM machine creation complete!
	I0717 01:14:43.321101   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetConfigRaw
	I0717 01:14:43.321579   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:14:43.321756   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:14:43.321933   55687 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 01:14:43.321948   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetState
	I0717 01:14:43.323355   55687 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 01:14:43.323368   55687 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 01:14:43.323373   55687 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 01:14:43.323379   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:14:43.325557   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.325900   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:43.325945   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.326204   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:14:43.326370   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:43.326545   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:43.326682   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:14:43.326828   55687 main.go:141] libmachine: Using SSH client type: native
	I0717 01:14:43.327024   55687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I0717 01:14:43.327039   55687 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 01:14:43.427583   55687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:14:43.427604   55687 main.go:141] libmachine: Detecting the provisioner...
	I0717 01:14:43.427612   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:14:43.430440   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.430734   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:43.430768   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.430933   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:14:43.431110   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:43.431293   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:43.431414   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:14:43.431567   55687 main.go:141] libmachine: Using SSH client type: native
	I0717 01:14:43.431786   55687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I0717 01:14:43.431801   55687 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 01:14:43.533610   55687 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 01:14:43.533675   55687 main.go:141] libmachine: found compatible host: buildroot
	I0717 01:14:43.533687   55687 main.go:141] libmachine: Provisioning with buildroot...
	I0717 01:14:43.533700   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetMachineName
	I0717 01:14:43.534020   55687 buildroot.go:166] provisioning hostname "old-k8s-version-249342"
	I0717 01:14:43.534052   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetMachineName
	I0717 01:14:43.534247   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:14:43.536889   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.537301   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:43.537326   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.537543   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:14:43.537768   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:43.537953   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:43.538092   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:14:43.538274   55687 main.go:141] libmachine: Using SSH client type: native
	I0717 01:14:43.538522   55687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I0717 01:14:43.538542   55687 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-249342 && echo "old-k8s-version-249342" | sudo tee /etc/hostname
	I0717 01:14:43.657709   55687 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-249342
	
	I0717 01:14:43.657733   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:14:43.660665   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.661035   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:43.661064   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.661217   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:14:43.661418   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:43.661567   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:43.661702   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:14:43.661962   55687 main.go:141] libmachine: Using SSH client type: native
	I0717 01:14:43.662136   55687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I0717 01:14:43.662154   55687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-249342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-249342/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-249342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:14:43.773499   55687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:14:43.773531   55687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 01:14:43.773576   55687 buildroot.go:174] setting up certificates
	I0717 01:14:43.773598   55687 provision.go:84] configureAuth start
	I0717 01:14:43.773612   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetMachineName
	I0717 01:14:43.773887   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetIP
	I0717 01:14:43.776481   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.776910   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:43.776938   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.777034   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:14:43.779091   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.779391   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:43.779421   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.779552   55687 provision.go:143] copyHostCerts
	I0717 01:14:43.779611   55687 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 01:14:43.779623   55687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 01:14:43.779679   55687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 01:14:43.779767   55687 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 01:14:43.779776   55687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 01:14:43.779797   55687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 01:14:43.779845   55687 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 01:14:43.779852   55687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 01:14:43.779874   55687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 01:14:43.779915   55687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-249342 san=[127.0.0.1 192.168.61.13 localhost minikube old-k8s-version-249342]
	I0717 01:14:43.849018   55687 provision.go:177] copyRemoteCerts
	I0717 01:14:43.849070   55687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:14:43.849100   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:14:43.851463   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.851706   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:43.851748   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:43.851871   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:14:43.852054   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:43.852211   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:14:43.852345   55687 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa Username:docker}
	I0717 01:14:43.934225   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 01:14:43.959759   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 01:14:43.985639   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:14:44.012549   55687 provision.go:87] duration metric: took 238.936027ms to configureAuth
	I0717 01:14:44.012596   55687 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:14:44.012801   55687 config.go:182] Loaded profile config "old-k8s-version-249342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:14:44.012898   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:14:44.015645   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.016046   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:44.016072   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.016223   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:14:44.016413   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:44.016552   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:44.016683   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:14:44.016870   55687 main.go:141] libmachine: Using SSH client type: native
	I0717 01:14:44.017062   55687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I0717 01:14:44.017085   55687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:14:44.276745   55687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:14:44.276766   55687 main.go:141] libmachine: Checking connection to Docker...
	I0717 01:14:44.276774   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetURL
	I0717 01:14:44.278017   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | Using libvirt version 6000000
	I0717 01:14:44.279902   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.280367   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:44.280395   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.280604   55687 main.go:141] libmachine: Docker is up and running!
	I0717 01:14:44.280620   55687 main.go:141] libmachine: Reticulating splines...
	I0717 01:14:44.280628   55687 client.go:171] duration metric: took 23.926639605s to LocalClient.Create
	I0717 01:14:44.280649   55687 start.go:167] duration metric: took 23.926697415s to libmachine.API.Create "old-k8s-version-249342"
	I0717 01:14:44.280658   55687 start.go:293] postStartSetup for "old-k8s-version-249342" (driver="kvm2")
	I0717 01:14:44.280667   55687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:14:44.280687   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:14:44.280910   55687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:14:44.280934   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:14:44.283134   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.283415   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:44.283441   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.283576   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:14:44.283746   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:44.283899   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:14:44.284018   55687 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa Username:docker}
	I0717 01:14:44.362811   55687 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:14:44.366958   55687 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:14:44.366979   55687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:14:44.367033   55687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:14:44.367105   55687 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:14:44.367180   55687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:14:44.376456   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:14:44.399892   55687 start.go:296] duration metric: took 119.220991ms for postStartSetup
	I0717 01:14:44.399939   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetConfigRaw
	I0717 01:14:44.400629   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetIP
	I0717 01:14:44.403762   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.404153   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:44.404176   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.404432   55687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/config.json ...
	I0717 01:14:44.404648   55687 start.go:128] duration metric: took 24.070992233s to createHost
	I0717 01:14:44.404674   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:14:44.406995   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.407362   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:44.407391   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.407494   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:14:44.407678   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:44.407828   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:44.408012   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:14:44.408192   55687 main.go:141] libmachine: Using SSH client type: native
	I0717 01:14:44.408350   55687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I0717 01:14:44.408365   55687 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 01:14:44.509495   55687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721178884.480448195
	
	I0717 01:14:44.509523   55687 fix.go:216] guest clock: 1721178884.480448195
	I0717 01:14:44.509533   55687 fix.go:229] Guest: 2024-07-17 01:14:44.480448195 +0000 UTC Remote: 2024-07-17 01:14:44.404661268 +0000 UTC m=+72.595309457 (delta=75.786927ms)
	I0717 01:14:44.509551   55687 fix.go:200] guest clock delta is within tolerance: 75.786927ms
	I0717 01:14:44.509555   55687 start.go:83] releasing machines lock for "old-k8s-version-249342", held for 24.176075235s
	I0717 01:14:44.509579   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:14:44.509892   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetIP
	I0717 01:14:44.513234   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.513677   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:44.513713   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.513885   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:14:44.514442   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:14:44.514616   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:14:44.514702   55687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:14:44.514741   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:14:44.514822   55687 ssh_runner.go:195] Run: cat /version.json
	I0717 01:14:44.514845   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:14:44.517694   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.518059   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:44.518104   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.518127   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.518360   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:14:44.518497   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:44.518514   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:44.518547   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:44.518678   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:14:44.518794   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:14:44.518858   55687 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa Username:docker}
	I0717 01:14:44.518944   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:14:44.519090   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:14:44.519208   55687 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa Username:docker}
	I0717 01:14:44.602702   55687 ssh_runner.go:195] Run: systemctl --version
	I0717 01:14:44.629407   55687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:14:44.788605   55687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:14:44.794920   55687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:14:44.794993   55687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:14:44.811599   55687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:14:44.811626   55687 start.go:495] detecting cgroup driver to use...
	I0717 01:14:44.811681   55687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:14:44.832431   55687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:14:44.846755   55687 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:14:44.846811   55687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:14:44.862774   55687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:14:44.878158   55687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:14:45.009328   55687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:14:45.187310   55687 docker.go:233] disabling docker service ...
	I0717 01:14:45.187370   55687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:14:45.204041   55687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:14:45.225671   55687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:14:45.368019   55687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:14:45.494588   55687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:14:45.511411   55687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:14:45.532311   55687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 01:14:45.532375   55687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:14:45.543304   55687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:14:45.543373   55687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:14:45.554089   55687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:14:45.564645   55687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:14:45.575695   55687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:14:45.586414   55687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:14:45.595329   55687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:14:45.595382   55687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:14:45.608983   55687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:14:45.620442   55687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:14:45.758706   55687 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:14:45.901295   55687 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:14:45.901367   55687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:14:45.906300   55687 start.go:563] Will wait 60s for crictl version
	I0717 01:14:45.906365   55687 ssh_runner.go:195] Run: which crictl
	I0717 01:14:45.910229   55687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:14:45.949515   55687 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:14:45.949594   55687 ssh_runner.go:195] Run: crio --version
	I0717 01:14:45.979033   55687 ssh_runner.go:195] Run: crio --version
	I0717 01:14:46.015438   55687 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 01:14:46.016829   55687 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetIP
	I0717 01:14:46.019620   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:46.020008   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:14:35 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:14:46.020045   55687 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:14:46.020342   55687 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 01:14:46.024863   55687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:14:46.039856   55687 kubeadm.go:883] updating cluster {Name:old-k8s-version-249342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-249342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:14:46.039988   55687 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:14:46.040052   55687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:14:46.080215   55687 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:14:46.080280   55687 ssh_runner.go:195] Run: which lz4
	I0717 01:14:46.086306   55687 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:14:46.090568   55687 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:14:46.090598   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 01:14:47.804350   55687 crio.go:462] duration metric: took 1.718080296s to copy over tarball
	I0717 01:14:47.804413   55687 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:14:50.464065   55687 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.659625388s)
	I0717 01:14:50.464095   55687 crio.go:469] duration metric: took 2.659719429s to extract the tarball
	I0717 01:14:50.464113   55687 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:14:50.507765   55687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:14:50.553545   55687 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:14:50.553570   55687 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:14:50.553631   55687 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:14:50.553683   55687 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:14:50.553721   55687 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:14:50.553743   55687 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 01:14:50.553753   55687 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 01:14:50.553697   55687 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:14:50.553966   55687 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:14:50.554113   55687 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:14:50.555422   55687 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:14:50.555465   55687 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:14:50.555418   55687 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 01:14:50.555483   55687 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 01:14:50.555818   55687 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:14:50.556045   55687 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:14:50.557653   55687 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:14:50.557861   55687 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:14:50.707517   55687 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:14:50.714112   55687 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:14:50.716387   55687 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 01:14:50.716836   55687 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:14:50.717179   55687 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 01:14:50.717806   55687 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:14:50.806756   55687 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 01:14:50.842418   55687 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 01:14:50.842465   55687 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:14:50.842516   55687 ssh_runner.go:195] Run: which crictl
	I0717 01:14:50.843531   55687 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:14:50.855988   55687 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 01:14:50.856039   55687 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:14:50.856087   55687 ssh_runner.go:195] Run: which crictl
	I0717 01:14:50.886426   55687 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 01:14:50.886482   55687 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:14:50.886486   55687 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 01:14:50.886522   55687 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 01:14:50.886574   55687 ssh_runner.go:195] Run: which crictl
	I0717 01:14:50.886531   55687 ssh_runner.go:195] Run: which crictl
	I0717 01:14:50.905650   55687 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 01:14:50.905694   55687 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:14:50.905737   55687 ssh_runner.go:195] Run: which crictl
	I0717 01:14:50.905735   55687 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 01:14:50.905808   55687 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:14:50.905872   55687 ssh_runner.go:195] Run: which crictl
	I0717 01:14:50.926835   55687 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:14:50.926904   55687 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 01:14:50.926943   55687 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 01:14:50.926985   55687 ssh_runner.go:195] Run: which crictl
	I0717 01:14:51.058978   55687 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:14:51.058995   55687 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:14:51.059018   55687 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 01:14:51.059056   55687 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 01:14:51.059078   55687 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:14:51.059117   55687 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 01:14:51.059167   55687 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 01:14:51.182357   55687 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 01:14:51.182562   55687 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 01:14:51.192856   55687 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 01:14:51.199079   55687 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 01:14:51.199166   55687 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 01:14:51.199201   55687 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 01:14:51.199384   55687 cache_images.go:92] duration metric: took 645.799066ms to LoadCachedImages
	W0717 01:14:51.199496   55687 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0717 01:14:51.199513   55687 kubeadm.go:934] updating node { 192.168.61.13 8443 v1.20.0 crio true true} ...
	I0717 01:14:51.199613   55687 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-249342 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-249342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
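Note: the unit text above becomes the 10-kubeadm.conf drop-in scp'd a few lines below; the empty ExecStart= line clears the ExecStart inherited from the base kubelet.service before the minikube-specific command line is set, which is standard systemd drop-in behaviour. To inspect the merged unit on the guest one could run (hypothetical check, not part of this run):

	systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart    # confirms only the overriding ExecStart survives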
	I0717 01:14:51.199721   55687 ssh_runner.go:195] Run: crio config
	I0717 01:14:51.252799   55687 cni.go:84] Creating CNI manager for ""
	I0717 01:14:51.252822   55687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:14:51.252833   55687 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:14:51.252853   55687 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.13 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-249342 NodeName:old-k8s-version-249342 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 01:14:51.252975   55687 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-249342"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
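Note: the kubeadm config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new (the 2120-byte scp below) and is later fed to kubeadm init via --config. To exercise just the preflight checks against it by hand, a sketch (using the same pinned binary path as the init command further down) would be:

	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml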
	
	I0717 01:14:51.253034   55687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 01:14:51.263135   55687 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:14:51.263196   55687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:14:51.272754   55687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0717 01:14:51.290299   55687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:14:51.306688   55687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0717 01:14:51.323305   55687 ssh_runner.go:195] Run: grep 192.168.61.13	control-plane.minikube.internal$ /etc/hosts
	I0717 01:14:51.327364   55687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:14:51.340000   55687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:14:51.477399   55687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:14:51.501538   55687 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342 for IP: 192.168.61.13
	I0717 01:14:51.501557   55687 certs.go:194] generating shared ca certs ...
	I0717 01:14:51.501570   55687 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:51.501712   55687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:14:51.501755   55687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:14:51.501765   55687 certs.go:256] generating profile certs ...
	I0717 01:14:51.501813   55687 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.key
	I0717 01:14:51.501826   55687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt with IP's: []
	I0717 01:14:51.663553   55687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt ...
	I0717 01:14:51.663582   55687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: {Name:mk50b234ca6aa8d20fc8f116d917da0e35c74c13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:51.674605   55687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.key ...
	I0717 01:14:51.674638   55687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.key: {Name:mk8f35621fc744dc05cc3b37aa5a57b75c64bb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:51.674803   55687 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.key.e96f0644
	I0717 01:14:51.674825   55687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.crt.e96f0644 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.13]
	I0717 01:14:51.868191   55687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.crt.e96f0644 ...
	I0717 01:14:51.868227   55687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.crt.e96f0644: {Name:mkfe8a28afbecabc3ddd89a3505ea6e90f22b9bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:51.870968   55687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.key.e96f0644 ...
	I0717 01:14:51.870992   55687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.key.e96f0644: {Name:mk6daf674fb4d6b899a8d204451298c3000594f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:51.871112   55687 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.crt.e96f0644 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.crt
	I0717 01:14:51.871240   55687 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.key.e96f0644 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.key
	I0717 01:14:51.871302   55687 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/proxy-client.key
	I0717 01:14:51.871318   55687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/proxy-client.crt with IP's: []
	I0717 01:14:52.130944   55687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/proxy-client.crt ...
	I0717 01:14:52.130975   55687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/proxy-client.crt: {Name:mkd04f546994813080a3375d437db32002ad3f79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:52.203181   55687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/proxy-client.key ...
	I0717 01:14:52.203236   55687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/proxy-client.key: {Name:mk30d88fa1a79ec104b7e5dec5e105d38dbb7601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:14:52.203520   55687 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:14:52.203567   55687 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:14:52.203583   55687 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:14:52.203611   55687 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:14:52.203639   55687 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:14:52.203674   55687 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:14:52.203731   55687 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:14:52.204521   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:14:52.236679   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:14:52.266887   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:14:52.297225   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:14:52.327898   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 01:14:52.360449   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:14:52.392545   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:14:52.431056   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:14:52.469723   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:14:52.497523   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:14:52.535894   55687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:14:52.567499   55687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:14:52.585023   55687 ssh_runner.go:195] Run: openssl version
	I0717 01:14:52.591187   55687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:14:52.602011   55687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:14:52.607288   55687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:14:52.607366   55687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:14:52.614928   55687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:14:52.626140   55687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:14:52.637309   55687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:14:52.641885   55687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:14:52.641940   55687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:14:52.649281   55687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:14:52.664425   55687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:14:52.676917   55687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:14:52.682428   55687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:14:52.682481   55687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:14:52.689884   55687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
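Note: the 8-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention; the openssl x509 -hash -noout calls are what produce those values, and each hash.0 symlink points back at the corresponding PEM. For example, for the minikube CA (values taken from this log):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0   # -> /etc/ssl/certs/minikubeCA.pem, per the ln -fs above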
	I0717 01:14:52.702358   55687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:14:52.706704   55687 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 01:14:52.706756   55687 kubeadm.go:392] StartCluster: {Name:old-k8s-version-249342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-249342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:14:52.706842   55687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:14:52.706900   55687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:14:52.746988   55687 cri.go:89] found id: ""
	I0717 01:14:52.747066   55687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:14:52.757808   55687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:14:52.767695   55687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:14:52.777783   55687 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:14:52.777807   55687 kubeadm.go:157] found existing configuration files:
	
	I0717 01:14:52.777863   55687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:14:52.787532   55687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:14:52.787596   55687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:14:52.799099   55687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:14:52.808968   55687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:14:52.809036   55687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:14:52.819522   55687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:14:52.828986   55687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:14:52.829048   55687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:14:52.839478   55687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:14:52.849030   55687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:14:52.849097   55687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:14:52.860527   55687 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:14:52.985153   55687 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 01:14:52.985466   55687 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:14:53.143789   55687 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:14:53.143986   55687 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:14:53.144156   55687 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:14:53.417213   55687 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:14:53.464175   55687 out.go:204]   - Generating certificates and keys ...
	I0717 01:14:53.464289   55687 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:14:53.464382   55687 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:14:53.676180   55687 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 01:14:53.821092   55687 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 01:14:54.067743   55687 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 01:14:54.304922   55687 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 01:14:54.387572   55687 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 01:14:54.387781   55687 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-249342] and IPs [192.168.61.13 127.0.0.1 ::1]
	I0717 01:14:54.511743   55687 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 01:14:54.512309   55687 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-249342] and IPs [192.168.61.13 127.0.0.1 ::1]
	I0717 01:14:54.617883   55687 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 01:14:54.802200   55687 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 01:14:55.229352   55687 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 01:14:55.229567   55687 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:14:55.490619   55687 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:14:55.582816   55687 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:14:55.797204   55687 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:14:56.161399   55687 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:14:56.183756   55687 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:14:56.184772   55687 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:14:56.184839   55687 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:14:56.335230   55687 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:14:56.337928   55687 out.go:204]   - Booting up control plane ...
	I0717 01:14:56.338046   55687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:14:56.353312   55687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:14:56.355510   55687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:14:56.356728   55687 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:14:56.361426   55687 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 01:15:36.355041   55687 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 01:15:36.356035   55687 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:15:36.356272   55687 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:15:41.356468   55687 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:15:41.356715   55687 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:15:51.355522   55687 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:15:51.355751   55687 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:16:11.354941   55687 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:16:11.355257   55687 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:16:51.356801   55687 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:16:51.357084   55687 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:16:51.357104   55687 kubeadm.go:310] 
	I0717 01:16:51.357162   55687 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 01:16:51.357240   55687 kubeadm.go:310] 		timed out waiting for the condition
	I0717 01:16:51.357260   55687 kubeadm.go:310] 
	I0717 01:16:51.357312   55687 kubeadm.go:310] 	This error is likely caused by:
	I0717 01:16:51.357349   55687 kubeadm.go:310] 		- The kubelet is not running
	I0717 01:16:51.357444   55687 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 01:16:51.357455   55687 kubeadm.go:310] 
	I0717 01:16:51.357555   55687 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 01:16:51.357605   55687 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 01:16:51.357648   55687 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 01:16:51.357657   55687 kubeadm.go:310] 
	I0717 01:16:51.357790   55687 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 01:16:51.357891   55687 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 01:16:51.357900   55687 kubeadm.go:310] 
	I0717 01:16:51.358057   55687 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 01:16:51.358200   55687 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 01:16:51.358308   55687 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 01:16:51.358378   55687 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 01:16:51.358385   55687 kubeadm.go:310] 
	I0717 01:16:51.359048   55687 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:16:51.359120   55687 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 01:16:51.359192   55687 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0717 01:16:51.359451   55687 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-249342] and IPs [192.168.61.13 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-249342] and IPs [192.168.61.13 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
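Note: on this CRI-O guest the troubleshooting hints in the kubeadm output translate to roughly the following (a sketch; none of this output was captured in the run):

	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause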
	
	I0717 01:16:51.359507   55687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 01:16:51.875761   55687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:16:51.894973   55687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:16:51.906025   55687 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:16:51.906042   55687 kubeadm.go:157] found existing configuration files:
	
	I0717 01:16:51.906076   55687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:16:51.917092   55687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:16:51.917143   55687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:16:51.928376   55687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:16:51.940906   55687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:16:51.940961   55687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:16:51.951353   55687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:16:51.961836   55687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:16:51.961882   55687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:16:51.973746   55687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:16:51.986732   55687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:16:51.986787   55687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:16:52.001043   55687 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:16:52.101083   55687 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 01:16:52.101187   55687 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:16:52.268844   55687 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:16:52.269012   55687 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:16:52.269145   55687 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:16:52.490853   55687 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:16:52.493757   55687 out.go:204]   - Generating certificates and keys ...
	I0717 01:16:52.493846   55687 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:16:52.493935   55687 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:16:52.494063   55687 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 01:16:52.494147   55687 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 01:16:52.494274   55687 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 01:16:52.494348   55687 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 01:16:52.494509   55687 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 01:16:52.494587   55687 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 01:16:52.494855   55687 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 01:16:52.495312   55687 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 01:16:52.495374   55687 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 01:16:52.495448   55687 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:16:52.928466   55687 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:16:53.262866   55687 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:16:53.388803   55687 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:16:53.711732   55687 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:16:53.729002   55687 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:16:53.730220   55687 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:16:53.730309   55687 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:16:53.896721   55687 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:16:53.898248   55687 out.go:204]   - Booting up control plane ...
	I0717 01:16:53.898383   55687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:16:53.901808   55687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:16:53.903189   55687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:16:53.911016   55687 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:16:53.914754   55687 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 01:17:33.918381   55687 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 01:17:33.918574   55687 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:17:33.918763   55687 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:17:38.919721   55687 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:17:38.920018   55687 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:17:48.921177   55687 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:17:48.921382   55687 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:18:08.920358   55687 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:18:08.920633   55687 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:18:48.919714   55687 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:18:48.919997   55687 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:18:48.920018   55687 kubeadm.go:310] 
	I0717 01:18:48.920068   55687 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 01:18:48.920123   55687 kubeadm.go:310] 		timed out waiting for the condition
	I0717 01:18:48.920141   55687 kubeadm.go:310] 
	I0717 01:18:48.920193   55687 kubeadm.go:310] 	This error is likely caused by:
	I0717 01:18:48.920270   55687 kubeadm.go:310] 		- The kubelet is not running
	I0717 01:18:48.920439   55687 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 01:18:48.920452   55687 kubeadm.go:310] 
	I0717 01:18:48.920605   55687 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 01:18:48.920653   55687 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 01:18:48.920699   55687 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 01:18:48.920712   55687 kubeadm.go:310] 
	I0717 01:18:48.920850   55687 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 01:18:48.921011   55687 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 01:18:48.921030   55687 kubeadm.go:310] 
	I0717 01:18:48.921181   55687 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 01:18:48.921312   55687 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 01:18:48.921436   55687 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 01:18:48.921539   55687 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 01:18:48.921547   55687 kubeadm.go:310] 
	I0717 01:18:48.922759   55687 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:18:48.922890   55687 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 01:18:48.922990   55687 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 01:18:48.923065   55687 kubeadm.go:394] duration metric: took 3m56.216312404s to StartCluster
	I0717 01:18:48.923130   55687 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:18:48.923196   55687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:18:48.975778   55687 cri.go:89] found id: ""
	I0717 01:18:48.975805   55687 logs.go:276] 0 containers: []
	W0717 01:18:48.975815   55687 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:18:48.975823   55687 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:18:48.975893   55687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:18:49.010559   55687 cri.go:89] found id: ""
	I0717 01:18:49.010587   55687 logs.go:276] 0 containers: []
	W0717 01:18:49.010598   55687 logs.go:278] No container was found matching "etcd"
	I0717 01:18:49.010605   55687 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:18:49.010667   55687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:18:49.059439   55687 cri.go:89] found id: ""
	I0717 01:18:49.059466   55687 logs.go:276] 0 containers: []
	W0717 01:18:49.059476   55687 logs.go:278] No container was found matching "coredns"
	I0717 01:18:49.059483   55687 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:18:49.059544   55687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:18:49.105204   55687 cri.go:89] found id: ""
	I0717 01:18:49.105237   55687 logs.go:276] 0 containers: []
	W0717 01:18:49.105250   55687 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:18:49.105259   55687 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:18:49.105326   55687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:18:49.147165   55687 cri.go:89] found id: ""
	I0717 01:18:49.147199   55687 logs.go:276] 0 containers: []
	W0717 01:18:49.147212   55687 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:18:49.147221   55687 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:18:49.147294   55687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:18:49.189324   55687 cri.go:89] found id: ""
	I0717 01:18:49.189348   55687 logs.go:276] 0 containers: []
	W0717 01:18:49.189357   55687 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:18:49.189364   55687 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:18:49.189429   55687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:18:49.231002   55687 cri.go:89] found id: ""
	I0717 01:18:49.231034   55687 logs.go:276] 0 containers: []
	W0717 01:18:49.231044   55687 logs.go:278] No container was found matching "kindnet"
	I0717 01:18:49.231057   55687 logs.go:123] Gathering logs for kubelet ...
	I0717 01:18:49.231074   55687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:18:49.291265   55687 logs.go:123] Gathering logs for dmesg ...
	I0717 01:18:49.291300   55687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:18:49.307546   55687 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:18:49.307586   55687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:18:49.503479   55687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:18:49.503504   55687 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:18:49.503519   55687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:18:49.628872   55687 logs.go:123] Gathering logs for container status ...
	I0717 01:18:49.628905   55687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 01:18:49.677815   55687 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 01:18:49.677860   55687 out.go:239] * 
	* 
	W0717 01:18:49.677939   55687 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 01:18:49.677967   55687 out.go:239] * 
	* 
	W0717 01:18:49.679025   55687 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 01:18:49.794357   55687 out.go:177] 
	W0717 01:18:49.801330   55687 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 01:18:49.801405   55687 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 01:18:49.801438   55687 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 01:18:49.806865   55687 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-249342 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342: exit status 6 (286.37187ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:18:50.145323   62951 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-249342" does not appear in /home/jenkins/minikube-integration/19265-12897/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-249342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (318.37s)
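
A possible manual follow-up, assembled only from commands already present in the log above (the profile name, driver, runtime and Kubernetes version come from the failing invocation; this is a sketch, not part of the captured test output):

	# Inspect the kubelet on the node, as the kubeadm output suggests
	minikube -p old-k8s-version-249342 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-249342 ssh "sudo journalctl -xeu kubelet"
	# Retry the start with the cgroup-driver hint from the suggestion above
	minikube start -p old-k8s-version-249342 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd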

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-249342 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-249342 create -f testdata/busybox.yaml: exit status 1 (43.467988ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-249342" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-249342 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342: exit status 6 (274.745605ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:18:50.467242   62995 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-249342" does not appear in /home/jenkins/minikube-integration/19265-12897/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-249342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342: exit status 6 (230.686398ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:18:50.704656   63025 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-249342" does not appear in /home/jenkins/minikube-integration/19265-12897/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-249342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)
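
The create step fails because the kubeconfig no longer has an entry for this profile; the status output above names the fix. A minimal sketch of that manual recovery, using only the commands shown in the warning and in the test step (not part of the captured output):

	# Re-point kubectl at the profile, as the status warning advises
	minikube -p old-k8s-version-249342 update-context
	# Then the deploy step from the test could be retried by hand
	kubectl --context old-k8s-version-249342 create -f testdata/busybox.yaml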

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (99.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-249342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-249342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m39.123385831s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-249342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-249342 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-249342 describe deploy/metrics-server -n kube-system: exit status 1 (52.512667ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-249342" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-249342 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342: exit status 6 (247.548447ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:20:30.124670   64522 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-249342" does not appear in /home/jenkins/minikube-integration/19265-12897/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-249342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (99.42s)
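Both the deploy/metrics-server describe and the post-mortem status check above fail for the same reason: the "old-k8s-version-249342" context is missing from the kubeconfig, matching the stale-context warning in the status output. A minimal recovery sketch, assuming the same profile name (this is the fix the warning itself suggests):

    kubectl config get-contexts                         # confirm the context is absent
    minikube update-context -p old-k8s-version-249342   # refresh the kubeconfig entry for the current VM endpoint
    minikube status -p old-k8s-version-249342           # re-check host / kubelet / apiserver state

This only restores kubectl access; it does not address why the apiserver was unreachable during the addon enable.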

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (522.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-249342 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-249342 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m41.027625298s)

                                                
                                                
-- stdout --
	* [old-k8s-version-249342] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-249342" primary control-plane node in "old-k8s-version-249342" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-249342" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:20:36.760971   64655 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:20:36.761107   64655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:20:36.761117   64655 out.go:304] Setting ErrFile to fd 2...
	I0717 01:20:36.761123   64655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:20:36.761367   64655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:20:36.761937   64655 out.go:298] Setting JSON to false
	I0717 01:20:36.762898   64655 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7386,"bootTime":1721171851,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:20:36.762960   64655 start.go:139] virtualization: kvm guest
	I0717 01:20:36.765058   64655 out.go:177] * [old-k8s-version-249342] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:20:36.766768   64655 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:20:36.766807   64655 notify.go:220] Checking for updates...
	I0717 01:20:36.769176   64655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:20:36.770481   64655 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:20:36.771842   64655 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:20:36.773153   64655 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:20:36.774648   64655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:20:36.776541   64655 config.go:182] Loaded profile config "old-k8s-version-249342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:20:36.777004   64655 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:20:36.777085   64655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:20:36.795053   64655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0717 01:20:36.795658   64655 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:20:36.796312   64655 main.go:141] libmachine: Using API Version  1
	I0717 01:20:36.796345   64655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:20:36.796748   64655 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:20:36.796958   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:20:36.798545   64655 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 01:20:36.799754   64655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:20:36.800212   64655 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:20:36.800259   64655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:20:36.817768   64655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40215
	I0717 01:20:36.818306   64655 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:20:36.818852   64655 main.go:141] libmachine: Using API Version  1
	I0717 01:20:36.818888   64655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:20:36.819279   64655 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:20:36.819653   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:20:36.862203   64655 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:20:36.863607   64655 start.go:297] selected driver: kvm2
	I0717 01:20:36.863636   64655 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-249342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-249342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:20:36.863783   64655 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:20:36.864821   64655 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:20:36.864911   64655 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:20:36.882486   64655 install.go:137] /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:20:36.883105   64655 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:20:36.883221   64655 cni.go:84] Creating CNI manager for ""
	I0717 01:20:36.883259   64655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:20:36.883333   64655 start.go:340] cluster config:
	{Name:old-k8s-version-249342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-249342 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:20:36.883514   64655 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:20:36.887583   64655 out.go:177] * Starting "old-k8s-version-249342" primary control-plane node in "old-k8s-version-249342" cluster
	I0717 01:20:36.890541   64655 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:20:36.890598   64655 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:20:36.890608   64655 cache.go:56] Caching tarball of preloaded images
	I0717 01:20:36.890718   64655 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:20:36.890732   64655 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 01:20:36.890876   64655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/config.json ...
	I0717 01:20:36.891119   64655 start.go:360] acquireMachinesLock for old-k8s-version-249342: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:20:50.546352   64655 start.go:364] duration metric: took 13.655204262s to acquireMachinesLock for "old-k8s-version-249342"
	I0717 01:20:50.546397   64655 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:20:50.546404   64655 fix.go:54] fixHost starting: 
	I0717 01:20:50.546887   64655 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:20:50.546942   64655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:20:50.564333   64655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I0717 01:20:50.564727   64655 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:20:50.565233   64655 main.go:141] libmachine: Using API Version  1
	I0717 01:20:50.565257   64655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:20:50.565640   64655 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:20:50.565861   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:20:50.566008   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetState
	I0717 01:20:50.567428   64655 fix.go:112] recreateIfNeeded on old-k8s-version-249342: state=Stopped err=<nil>
	I0717 01:20:50.567468   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	W0717 01:20:50.567606   64655 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:20:50.569724   64655 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-249342" ...
	I0717 01:20:50.570976   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .Start
	I0717 01:20:50.571157   64655 main.go:141] libmachine: (old-k8s-version-249342) Ensuring networks are active...
	I0717 01:20:50.571991   64655 main.go:141] libmachine: (old-k8s-version-249342) Ensuring network default is active
	I0717 01:20:50.572406   64655 main.go:141] libmachine: (old-k8s-version-249342) Ensuring network mk-old-k8s-version-249342 is active
	I0717 01:20:50.572908   64655 main.go:141] libmachine: (old-k8s-version-249342) Getting domain xml...
	I0717 01:20:50.573698   64655 main.go:141] libmachine: (old-k8s-version-249342) Creating domain...
	I0717 01:20:52.009135   64655 main.go:141] libmachine: (old-k8s-version-249342) Waiting to get IP...
	I0717 01:20:52.010240   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:20:52.010762   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:20:52.010854   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:20:52.010760   64803 retry.go:31] will retry after 269.530013ms: waiting for machine to come up
	I0717 01:20:52.282453   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:20:52.282948   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:20:52.282971   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:20:52.282887   64803 retry.go:31] will retry after 361.36356ms: waiting for machine to come up
	I0717 01:20:52.645524   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:20:52.646123   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:20:52.646152   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:20:52.646028   64803 retry.go:31] will retry after 332.655589ms: waiting for machine to come up
	I0717 01:20:52.980716   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:20:52.981318   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:20:52.981349   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:20:52.981294   64803 retry.go:31] will retry after 481.217115ms: waiting for machine to come up
	I0717 01:20:53.832905   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:20:53.833356   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:20:53.833381   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:20:53.833320   64803 retry.go:31] will retry after 752.958719ms: waiting for machine to come up
	I0717 01:20:54.588220   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:20:54.588742   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:20:54.588764   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:20:54.588693   64803 retry.go:31] will retry after 927.958489ms: waiting for machine to come up
	I0717 01:20:55.519090   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:20:55.519618   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:20:55.519643   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:20:55.519569   64803 retry.go:31] will retry after 1.055396886s: waiting for machine to come up
	I0717 01:20:56.576354   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:20:56.576960   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:20:56.576989   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:20:56.576903   64803 retry.go:31] will retry after 1.039912103s: waiting for machine to come up
	I0717 01:20:57.618934   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:20:57.619805   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:20:57.619833   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:20:57.619766   64803 retry.go:31] will retry after 1.146109191s: waiting for machine to come up
	I0717 01:20:58.767031   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:20:58.767468   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:20:58.767493   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:20:58.767429   64803 retry.go:31] will retry after 2.228465995s: waiting for machine to come up
	I0717 01:21:00.997757   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:00.998337   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:21:00.998364   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:21:00.998286   64803 retry.go:31] will retry after 2.755533885s: waiting for machine to come up
	I0717 01:21:03.757520   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:03.757995   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:21:03.758044   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:21:03.757973   64803 retry.go:31] will retry after 3.545854358s: waiting for machine to come up
	I0717 01:21:07.306537   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:07.306986   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | unable to find current IP address of domain old-k8s-version-249342 in network mk-old-k8s-version-249342
	I0717 01:21:07.307019   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | I0717 01:21:07.306943   64803 retry.go:31] will retry after 4.450520967s: waiting for machine to come up
	I0717 01:21:11.758639   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:11.759323   64655 main.go:141] libmachine: (old-k8s-version-249342) Found IP for machine: 192.168.61.13
	I0717 01:21:11.759344   64655 main.go:141] libmachine: (old-k8s-version-249342) Reserving static IP address...
	I0717 01:21:11.759357   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has current primary IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:11.759842   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "old-k8s-version-249342", mac: "52:54:00:f3:5b:b9", ip: "192.168.61.13"} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:11.759872   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | skip adding static IP to network mk-old-k8s-version-249342 - found existing host DHCP lease matching {name: "old-k8s-version-249342", mac: "52:54:00:f3:5b:b9", ip: "192.168.61.13"}
	I0717 01:21:11.759889   64655 main.go:141] libmachine: (old-k8s-version-249342) Reserved static IP address: 192.168.61.13
	I0717 01:21:11.759907   64655 main.go:141] libmachine: (old-k8s-version-249342) Waiting for SSH to be available...
	I0717 01:21:11.759920   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | Getting to WaitForSSH function...
	I0717 01:21:11.762207   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:11.762700   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:11.762743   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:11.762890   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | Using SSH client type: external
	I0717 01:21:11.762926   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa (-rw-------)
	I0717 01:21:11.762961   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:21:11.762978   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | About to run SSH command:
	I0717 01:21:11.762992   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | exit 0
	I0717 01:21:11.888880   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | SSH cmd err, output: <nil>: 
	I0717 01:21:11.889274   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetConfigRaw
	I0717 01:21:11.889940   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetIP
	I0717 01:21:11.892487   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:11.892775   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:11.892803   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:11.893029   64655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/config.json ...
	I0717 01:21:11.893268   64655 machine.go:94] provisionDockerMachine start ...
	I0717 01:21:11.893288   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:21:11.893515   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:21:11.895620   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:11.896021   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:11.896065   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:11.896177   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:21:11.896416   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:11.896584   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:11.896719   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:21:11.896877   64655 main.go:141] libmachine: Using SSH client type: native
	I0717 01:21:11.897087   64655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I0717 01:21:11.897104   64655 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:21:12.005059   64655 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:21:12.005114   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetMachineName
	I0717 01:21:12.005382   64655 buildroot.go:166] provisioning hostname "old-k8s-version-249342"
	I0717 01:21:12.005406   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetMachineName
	I0717 01:21:12.005548   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:21:12.008129   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:12.008497   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:12.008518   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:12.008681   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:21:12.008856   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:12.009165   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:12.009344   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:21:12.009571   64655 main.go:141] libmachine: Using SSH client type: native
	I0717 01:21:12.009771   64655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I0717 01:21:12.009788   64655 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-249342 && echo "old-k8s-version-249342" | sudo tee /etc/hostname
	I0717 01:21:12.131801   64655 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-249342
	
	I0717 01:21:12.131851   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:21:12.134852   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:12.135228   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:12.135262   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:12.135401   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:21:12.135608   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:12.135800   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:12.135974   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:21:12.136136   64655 main.go:141] libmachine: Using SSH client type: native
	I0717 01:21:12.136311   64655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I0717 01:21:12.136327   64655 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-249342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-249342/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-249342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:21:12.254439   64655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:21:12.254471   64655 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 01:21:12.254521   64655 buildroot.go:174] setting up certificates
	I0717 01:21:12.254531   64655 provision.go:84] configureAuth start
	I0717 01:21:12.254541   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetMachineName
	I0717 01:21:12.254821   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetIP
	I0717 01:21:12.257341   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:12.257691   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:12.257734   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:12.257843   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:21:12.259916   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:12.260237   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:12.260260   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:12.260409   64655 provision.go:143] copyHostCerts
	I0717 01:21:12.260463   64655 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 01:21:12.260476   64655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 01:21:12.260534   64655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 01:21:12.260667   64655 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 01:21:12.260681   64655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 01:21:12.260715   64655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 01:21:12.260780   64655 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 01:21:12.260789   64655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 01:21:12.260819   64655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 01:21:12.260924   64655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-249342 san=[127.0.0.1 192.168.61.13 localhost minikube old-k8s-version-249342]
	I0717 01:21:12.607540   64655 provision.go:177] copyRemoteCerts
	I0717 01:21:12.607608   64655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:21:12.607633   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:21:12.610269   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:12.610564   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:12.610594   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:12.610748   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:21:12.610973   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:12.611139   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:21:12.611307   64655 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa Username:docker}
	I0717 01:21:12.694525   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 01:21:12.718745   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 01:21:12.743225   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:21:12.767272   64655 provision.go:87] duration metric: took 512.730427ms to configureAuth
	I0717 01:21:12.767302   64655 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:21:12.767460   64655 config.go:182] Loaded profile config "old-k8s-version-249342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:21:12.767537   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:21:12.770299   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:12.770634   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:12.770665   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:12.770863   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:21:12.771088   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:12.771269   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:12.771379   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:21:12.771498   64655 main.go:141] libmachine: Using SSH client type: native
	I0717 01:21:12.771691   64655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I0717 01:21:12.771708   64655 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:21:13.040904   64655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:21:13.040942   64655 machine.go:97] duration metric: took 1.147659698s to provisionDockerMachine
	I0717 01:21:13.040957   64655 start.go:293] postStartSetup for "old-k8s-version-249342" (driver="kvm2")
	I0717 01:21:13.040970   64655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:21:13.040989   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:21:13.041380   64655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:21:13.041414   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:21:13.044324   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:13.044644   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:13.044673   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:13.044822   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:21:13.045007   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:13.045159   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:21:13.045284   64655 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa Username:docker}
	I0717 01:21:13.127252   64655 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:21:13.131386   64655 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:21:13.131411   64655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:21:13.131477   64655 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:21:13.131553   64655 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:21:13.131637   64655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:21:13.140731   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:21:13.164891   64655 start.go:296] duration metric: took 123.917936ms for postStartSetup
	I0717 01:21:13.164941   64655 fix.go:56] duration metric: took 22.618536197s for fixHost
	I0717 01:21:13.164967   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:21:13.167612   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:13.168041   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:13.168063   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:13.168192   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:21:13.168410   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:13.168616   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:13.168750   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:21:13.168881   64655 main.go:141] libmachine: Using SSH client type: native
	I0717 01:21:13.169051   64655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I0717 01:21:13.169061   64655 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 01:21:13.277264   64655 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721179273.252811431
	
	I0717 01:21:13.277288   64655 fix.go:216] guest clock: 1721179273.252811431
	I0717 01:21:13.277299   64655 fix.go:229] Guest: 2024-07-17 01:21:13.252811431 +0000 UTC Remote: 2024-07-17 01:21:13.164946923 +0000 UTC m=+36.451506885 (delta=87.864508ms)
	I0717 01:21:13.277361   64655 fix.go:200] guest clock delta is within tolerance: 87.864508ms
	I0717 01:21:13.277372   64655 start.go:83] releasing machines lock for "old-k8s-version-249342", held for 22.730995576s
	I0717 01:21:13.277409   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:21:13.277708   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetIP
	I0717 01:21:13.280762   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:13.281124   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:13.281150   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:13.281343   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:21:13.281909   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:21:13.282097   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .DriverName
	I0717 01:21:13.282165   64655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:21:13.282224   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:21:13.282338   64655 ssh_runner.go:195] Run: cat /version.json
	I0717 01:21:13.282360   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHHostname
	I0717 01:21:13.285101   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:13.285365   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:13.285568   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:13.285593   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:13.285705   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:21:13.285800   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:13.285829   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:13.285880   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:13.285980   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHPort
	I0717 01:21:13.286195   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:21:13.286194   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHKeyPath
	I0717 01:21:13.286382   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetSSHUsername
	I0717 01:21:13.286399   64655 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa Username:docker}
	I0717 01:21:13.286543   64655 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/old-k8s-version-249342/id_rsa Username:docker}
	I0717 01:21:13.395715   64655 ssh_runner.go:195] Run: systemctl --version
	I0717 01:21:13.402497   64655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:21:13.553616   64655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:21:13.559922   64655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:21:13.559993   64655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:21:13.577130   64655 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:21:13.577151   64655 start.go:495] detecting cgroup driver to use...
	I0717 01:21:13.577215   64655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:21:13.598991   64655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:21:13.615138   64655 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:21:13.615190   64655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:21:13.629548   64655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:21:13.643552   64655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:21:13.771548   64655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:21:13.944222   64655 docker.go:233] disabling docker service ...
	I0717 01:21:13.944300   64655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:21:13.961829   64655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:21:13.975936   64655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:21:14.100687   64655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:21:14.226946   64655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
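
	The docker.go:217 / docker.go:233 steps above walk the cri-docker and docker units through the same stop, disable-socket, mask-service sequence so that neither can reclaim the CRI socket. A minimal local sketch of that sequence in Go (unit names taken from the log; sudo and systemd are assumed to be available, and minikube itself runs these commands over SSH rather than locally):

package main

import (
	"log"
	"os/exec"
)

// quiet walks a socket/service pair through the same sequence as the log:
// stop both units, disable the socket, mask the service.
func quiet(socket, service string) {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", socket},
		{"systemctl", "stop", "-f", service},
		{"systemctl", "disable", socket},
		{"systemctl", "mask", service},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			log.Printf("%v: %v\n%s", args, err, out)
		}
	}
}

func main() {
	quiet("cri-docker.socket", "cri-docker.service")
	quiet("docker.socket", "docker.service")
}
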
	I0717 01:21:14.242159   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:21:14.261741   64655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 01:21:14.261802   64655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:21:14.271927   64655 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:21:14.271991   64655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:21:14.282103   64655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:21:14.292425   64655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
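
	The sed calls above pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. A rough local equivalent of the first two rewrites in Go, assuming direct file access rather than the ssh_runner used here (the conmon_cgroup edit is omitted for brevity):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}

	// Mirror: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))

	// Mirror: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
}
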
	I0717 01:21:14.302691   64655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:21:14.314790   64655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:21:14.324848   64655 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:21:14.324914   64655 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:21:14.340108   64655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:21:14.351034   64655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:21:14.478998   64655 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:21:14.629419   64655 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:21:14.629489   64655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:21:14.635041   64655 start.go:563] Will wait 60s for crictl version
	I0717 01:21:14.635167   64655 ssh_runner.go:195] Run: which crictl
	I0717 01:21:14.639163   64655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:21:14.682447   64655 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:21:14.682536   64655 ssh_runner.go:195] Run: crio --version
	I0717 01:21:14.715143   64655 ssh_runner.go:195] Run: crio --version
	I0717 01:21:14.750835   64655 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 01:21:14.752056   64655 main.go:141] libmachine: (old-k8s-version-249342) Calling .GetIP
	I0717 01:21:14.755405   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:14.755829   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:5b:b9", ip: ""} in network mk-old-k8s-version-249342: {Iface:virbr3 ExpiryTime:2024-07-17 02:21:02 +0000 UTC Type:0 Mac:52:54:00:f3:5b:b9 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:old-k8s-version-249342 Clientid:01:52:54:00:f3:5b:b9}
	I0717 01:21:14.755855   64655 main.go:141] libmachine: (old-k8s-version-249342) DBG | domain old-k8s-version-249342 has defined IP address 192.168.61.13 and MAC address 52:54:00:f3:5b:b9 in network mk-old-k8s-version-249342
	I0717 01:21:14.756138   64655 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 01:21:14.760768   64655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
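
	The /etc/hosts one-liner above drops any stale host.minikube.internal line and appends the current gateway IP. A minimal sketch of the same idea in Go, assuming local root access and the 192.168.61.1 address from this log:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsFile = "/etc/hosts"
	const entry = "192.168.61.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsFile)
	if err != nil {
		log.Fatal(err)
	}

	// Keep every line that does not already end with the minikube hostname,
	// then append the fresh entry (same effect as the grep -v / echo pipeline).
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry, "")

	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		log.Fatal(err)
	}
}
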
	I0717 01:21:14.773535   64655 kubeadm.go:883] updating cluster {Name:old-k8s-version-249342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-249342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:21:14.773661   64655 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:21:14.773719   64655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:21:14.821021   64655 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:21:14.821080   64655 ssh_runner.go:195] Run: which lz4
	I0717 01:21:14.825818   64655 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:21:14.830948   64655 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:21:14.830980   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 01:21:16.583176   64655 crio.go:462] duration metric: took 1.757387623s to copy over tarball
	I0717 01:21:16.583249   64655 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:21:19.504532   64655 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.921246098s)
	I0717 01:21:19.504590   64655 crio.go:469] duration metric: took 2.921388564s to extract the tarball
	I0717 01:21:19.504601   64655 ssh_runner.go:146] rm: /preloaded.tar.lz4
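
	The preload tarball is copied to the VM and unpacked with the tar invocation shown above; the ssh_runner.go:235 line only reports how long that took. A sketch of running and timing the same extraction locally (flags copied from the log; GNU tar and lz4 are assumed to be installed):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()

	// Same flags as the log: preserve security.capability xattrs,
	// decompress with lz4, extract under /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}

	log.Printf("duration metric: took %s to extract the tarball", time.Since(start))
}
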
	I0717 01:21:19.550083   64655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:21:19.588287   64655 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:21:19.588314   64655 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:21:19.588393   64655 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:21:19.588393   64655 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:21:19.588441   64655 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:21:19.588455   64655 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 01:21:19.588474   64655 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:21:19.588494   64655 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 01:21:19.588423   64655 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:21:19.588395   64655 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:21:19.590328   64655 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:21:19.590341   64655 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:21:19.590327   64655 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 01:21:19.590333   64655 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 01:21:19.590430   64655 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:21:19.590454   64655 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:21:19.590573   64655 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:21:19.590966   64655 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:21:19.738371   64655 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:21:19.740389   64655 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:21:19.741425   64655 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:21:19.745579   64655 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:21:19.746255   64655 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 01:21:19.751213   64655 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 01:21:19.871702   64655 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:21:19.906764   64655 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 01:21:19.906799   64655 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:21:19.906843   64655 ssh_runner.go:195] Run: which crictl
	I0717 01:21:19.908360   64655 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 01:21:19.908373   64655 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 01:21:19.908396   64655 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 01:21:19.908399   64655 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:21:19.908403   64655 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:21:19.908413   64655 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:21:19.908440   64655 ssh_runner.go:195] Run: which crictl
	I0717 01:21:19.908443   64655 ssh_runner.go:195] Run: which crictl
	I0717 01:21:19.908443   64655 ssh_runner.go:195] Run: which crictl
	I0717 01:21:19.913869   64655 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 01:21:19.913907   64655 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:21:19.913955   64655 ssh_runner.go:195] Run: which crictl
	I0717 01:21:19.931740   64655 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 01:21:19.931788   64655 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 01:21:19.931832   64655 ssh_runner.go:195] Run: which crictl
	I0717 01:21:19.952568   64655 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 01:21:20.063249   64655 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:21:20.063287   64655 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:21:20.063322   64655 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:21:20.063332   64655 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:21:20.063356   64655 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 01:21:20.063419   64655 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 01:21:20.063429   64655 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 01:21:20.063457   64655 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 01:21:20.063484   64655 ssh_runner.go:195] Run: which crictl
	I0717 01:21:20.203724   64655 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 01:21:20.203737   64655 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 01:21:20.203797   64655 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 01:21:20.203854   64655 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 01:21:20.203916   64655 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 01:21:20.203924   64655 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 01:21:20.203956   64655 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 01:21:20.236705   64655 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 01:21:20.236774   64655 cache_images.go:92] duration metric: took 648.444512ms to LoadCachedImages
	W0717 01:21:20.236860   64655 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
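
	The cache_images block above inspects every pinned image with podman, marks any mismatch as needing transfer, removes it with crictl, and then tries to reload it from the local image cache, which is missing on this host and produces the warning just shown. A condensed sketch of that decision for one image, with the expected ID taken from the log and local execution assumed:

package main

import (
	"log"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the image stored in the container runtime
// differs from the ID minikube has pinned for it.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	image := "registry.k8s.io/kube-controller-manager:v1.20.0"
	wantID := "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080"

	if needsTransfer(image, wantID) {
		log.Printf("%q needs transfer", image)
		// The log then runs: sudo /usr/bin/crictl rmi <image>
		// and loads the replacement from the on-disk image cache.
		if out, err := exec.Command("sudo", "/usr/bin/crictl", "rmi", image).CombinedOutput(); err != nil {
			log.Printf("rmi failed: %v\n%s", err, out)
		}
	}
}
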
	I0717 01:21:20.236878   64655 kubeadm.go:934] updating node { 192.168.61.13 8443 v1.20.0 crio true true} ...
	I0717 01:21:20.236987   64655 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-249342 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-249342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:21:20.237063   64655 ssh_runner.go:195] Run: crio config
	I0717 01:21:20.286802   64655 cni.go:84] Creating CNI manager for ""
	I0717 01:21:20.286826   64655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:21:20.286838   64655 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:21:20.286854   64655 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.13 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-249342 NodeName:old-k8s-version-249342 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 01:21:20.286991   64655 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-249342"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:21:20.287065   64655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 01:21:20.297784   64655 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:21:20.297869   64655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:21:20.308431   64655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0717 01:21:20.326613   64655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:21:20.344523   64655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0717 01:21:20.363839   64655 ssh_runner.go:195] Run: grep 192.168.61.13	control-plane.minikube.internal$ /etc/hosts
	I0717 01:21:20.367948   64655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:21:20.382667   64655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:21:20.524465   64655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:21:20.542049   64655 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342 for IP: 192.168.61.13
	I0717 01:21:20.542126   64655 certs.go:194] generating shared ca certs ...
	I0717 01:21:20.542153   64655 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:21:20.542333   64655 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:21:20.542392   64655 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:21:20.542406   64655 certs.go:256] generating profile certs ...
	I0717 01:21:20.542581   64655 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.key
	I0717 01:21:20.542650   64655 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.key.e96f0644
	I0717 01:21:20.542713   64655 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/proxy-client.key
	I0717 01:21:20.542838   64655 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:21:20.542867   64655 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:21:20.542873   64655 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:21:20.542891   64655 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:21:20.542917   64655 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:21:20.542937   64655 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:21:20.542978   64655 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:21:20.543571   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:21:20.574526   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:21:20.614448   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:21:20.640565   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:21:20.672983   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 01:21:20.703065   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:21:20.729659   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:21:20.759910   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:21:20.805493   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:21:20.842631   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:21:20.867873   64655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:21:20.893813   64655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:21:20.912589   64655 ssh_runner.go:195] Run: openssl version
	I0717 01:21:20.918731   64655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:21:20.932745   64655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:21:20.938840   64655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:21:20.938907   64655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:21:20.947006   64655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:21:20.958880   64655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:21:20.970408   64655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:21:20.975096   64655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:21:20.975175   64655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:21:20.981354   64655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:21:20.993199   64655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:21:21.004162   64655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:21:21.009032   64655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:21:21.009108   64655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:21:21.015046   64655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
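
	Each CA certificate above is installed under its own name and then again under the short OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0 here) so that library lookups can find it. A sketch of creating such a hash link, shelling out to openssl for the hash the way the log does (paths copied from the log; root access assumed):

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/200682.pem"

	// openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. "3ec20f2e".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	link := "/etc/ssl/certs/" + hash + ".0"
	target := "/etc/ssl/certs/200682.pem"

	// Equivalent of: test -L <link> || ln -fs <target> <link>
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(target, link); err != nil {
			log.Fatal(err)
		}
	}
}
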
	I0717 01:21:21.026565   64655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:21:21.031384   64655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:21:21.037264   64655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:21:21.043555   64655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:21:21.051797   64655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:21:21.060218   64655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:21:21.066719   64655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
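
	The openssl x509 -checkend 86400 probes above ask whether each control-plane certificate will still be valid 24 hours from now. The same check expressed in pure Go with crypto/x509 instead of shelling out (certificate path from the log; readable only as root on a real node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"
	"time"
)

func main() {
	const crt = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

	data, err := os.ReadFile(crt)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM block found in %s", crt)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// openssl x509 -checkend 86400: fail if the cert expires within the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		log.Fatalf("%s expires within 24h (NotAfter=%s)", crt, cert.NotAfter)
	}
	log.Printf("%s is valid for at least another 24h", crt)
}
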
	I0717 01:21:21.073712   64655 kubeadm.go:392] StartCluster: {Name:old-k8s-version-249342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-249342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:21:21.073835   64655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:21:21.073900   64655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:21:21.125675   64655 cri.go:89] found id: ""
	I0717 01:21:21.125753   64655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:21:21.138093   64655 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:21:21.138118   64655 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:21:21.138168   64655 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:21:21.152182   64655 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:21:21.153603   64655 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-249342" does not appear in /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:21:21.154482   64655 kubeconfig.go:62] /home/jenkins/minikube-integration/19265-12897/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-249342" cluster setting kubeconfig missing "old-k8s-version-249342" context setting]
	I0717 01:21:21.155818   64655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:21:21.189655   64655 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:21:21.200433   64655 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.13
	I0717 01:21:21.200470   64655 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:21:21.200484   64655 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:21:21.200544   64655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:21:21.243176   64655 cri.go:89] found id: ""
	I0717 01:21:21.243267   64655 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:21:21.266655   64655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:21:21.277390   64655 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:21:21.277412   64655 kubeadm.go:157] found existing configuration files:
	
	I0717 01:21:21.277455   64655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:21:21.288137   64655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:21:21.288203   64655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:21:21.297963   64655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:21:21.308166   64655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:21:21.308237   64655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:21:21.320535   64655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:21:21.330462   64655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:21:21.330547   64655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:21:21.340972   64655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:21:21.350546   64655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:21:21.350616   64655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:21:21.361904   64655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:21:21.372530   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:21:21.551248   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:21:22.513434   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:21:22.830754   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:21:22.997174   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:21:23.103582   64655 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:21:23.103681   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:23.604773   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:24.104082   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:24.604006   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:25.103947   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:25.604340   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:26.103988   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:26.604462   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:27.104208   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:27.603830   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:28.104683   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:28.603815   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:29.104515   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:29.604119   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:30.103789   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:30.603784   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:31.104599   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:31.604520   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:32.104520   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:32.604065   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:33.104593   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:33.603986   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:34.104250   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:34.604378   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:35.104573   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:35.604002   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:36.103760   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:36.604302   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:37.104276   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:37.603764   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:38.104312   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:38.604673   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:39.103929   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:39.604748   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:40.104737   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:40.604408   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:41.104235   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:41.604744   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:42.103929   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:42.603825   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:43.104576   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:43.604483   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:44.104471   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:44.603896   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:45.104293   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:45.603952   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:46.104427   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:46.604466   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:47.104170   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:47.603765   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:48.104653   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:48.603751   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:49.104602   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:49.603806   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:50.104453   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:50.603981   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:51.103916   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:51.604645   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:52.104323   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:52.604823   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:53.103881   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:53.604285   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:54.104718   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:54.604326   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:55.104776   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:55.604468   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:56.103903   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:56.604526   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:57.104014   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:57.604741   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:58.103978   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:58.604032   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:59.103806   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:21:59.604654   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:00.104344   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:00.603889   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:01.103833   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:01.604031   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:02.104214   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:02.604136   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:03.104568   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:03.604493   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:04.103744   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:04.604663   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:05.103981   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:05.603977   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:06.103774   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:06.604372   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:07.104210   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:07.603823   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:08.104160   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:08.604173   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:09.104044   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:09.604549   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:10.104117   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:10.604273   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:11.103791   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:11.603752   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:12.104155   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:12.603865   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:13.104091   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:13.604322   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:14.103782   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:14.604615   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:15.104142   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:15.604722   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:16.104404   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:16.603851   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:17.104584   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:17.604377   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:18.104443   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:18.604645   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:19.104142   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:19.604109   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:20.104233   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:20.604148   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:21.104218   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:21.604361   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:22.103944   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:22.604698   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
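
	The long run of pgrep probes above is the api_server.go:52 wait loop: minikube re-checks for a kube-apiserver process roughly every 500 ms until one appears or the wait budget is exhausted (the exact budget is not printed in this excerpt). A minimal sketch of that kind of poll, with the 60-second deadline being an assumption:

package main

import (
	"log"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process shows up
// or the deadline passes, mirroring the probe seen in the log.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same probe as the log: sudo pgrep -xnf kube-apiserver.*minikube.*
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	if !waitForAPIServer(60 * time.Second) {
		log.Fatal("kube-apiserver process never appeared")
	}
	log.Println("kube-apiserver is running")
}
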
	I0717 01:22:23.104528   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:22:23.104606   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:22:23.144000   64655 cri.go:89] found id: ""
	I0717 01:22:23.144026   64655 logs.go:276] 0 containers: []
	W0717 01:22:23.144036   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:22:23.144044   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:22:23.144097   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:22:23.180004   64655 cri.go:89] found id: ""
	I0717 01:22:23.180025   64655 logs.go:276] 0 containers: []
	W0717 01:22:23.180032   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:22:23.180037   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:22:23.180089   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:22:23.227998   64655 cri.go:89] found id: ""
	I0717 01:22:23.228022   64655 logs.go:276] 0 containers: []
	W0717 01:22:23.228030   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:22:23.228037   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:22:23.228092   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:22:23.267200   64655 cri.go:89] found id: ""
	I0717 01:22:23.267232   64655 logs.go:276] 0 containers: []
	W0717 01:22:23.267243   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:22:23.267251   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:22:23.267310   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:22:23.315950   64655 cri.go:89] found id: ""
	I0717 01:22:23.315975   64655 logs.go:276] 0 containers: []
	W0717 01:22:23.315983   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:22:23.315988   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:22:23.316048   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:22:23.351688   64655 cri.go:89] found id: ""
	I0717 01:22:23.351712   64655 logs.go:276] 0 containers: []
	W0717 01:22:23.351719   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:22:23.351728   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:22:23.351782   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:22:23.393541   64655 cri.go:89] found id: ""
	I0717 01:22:23.393566   64655 logs.go:276] 0 containers: []
	W0717 01:22:23.393576   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:22:23.393583   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:22:23.393642   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:22:23.429539   64655 cri.go:89] found id: ""
	I0717 01:22:23.429570   64655 logs.go:276] 0 containers: []
	W0717 01:22:23.429580   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
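
	Each "listing CRI containers" / found id: "" pair above is cri.go asking crictl for container IDs by name and getting an empty list back, since no control-plane containers exist yet. A sketch of that listing and of splitting the --quiet output into IDs (crictl and sudo assumed to be available):

package main

import (
	"log"
	"os/exec"
	"strings"
)

// listContainers returns the IDs crictl reports for a given container name,
// including exited ones (-a), one ID per output line.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(name)
		if err != nil {
			log.Printf("listing %q failed: %v", name, err)
			continue
		}
		log.Printf("%d containers: %v", len(ids), ids)
	}
}
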
	I0717 01:22:23.429591   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:22:23.429608   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:22:23.575767   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:22:23.575790   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:22:23.575805   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:22:23.656015   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:22:23.656053   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:22:23.699766   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:22:23.699797   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:22:23.749133   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:22:23.749165   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
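	The block above is one iteration of minikube's control-plane poll: for each expected component it runs crictl with a name filter and, finding nothing, logs the "No container was found matching ..." warning. A minimal sketch of the same per-component check, assuming shell access to the node (for example via minikube ssh) and crictl on PATH; the component list is copied from the log:

	  # Check each control-plane component the harness looks for; an empty
	  # result from crictl means no container (running or exited) matches.
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	           kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl ps -a --quiet --name="$c")
	    if [ -z "$ids" ]; then
	      echo "no container found matching \"$c\""
	    else
	      echo "$c: $ids"
	    fi
	  done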
	I0717 01:22:26.263567   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:26.277277   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:22:26.277349   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:22:26.312160   64655 cri.go:89] found id: ""
	I0717 01:22:26.312184   64655 logs.go:276] 0 containers: []
	W0717 01:22:26.312192   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:22:26.312197   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:22:26.312252   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:22:26.349436   64655 cri.go:89] found id: ""
	I0717 01:22:26.349465   64655 logs.go:276] 0 containers: []
	W0717 01:22:26.349477   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:22:26.349484   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:22:26.349545   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:22:26.385480   64655 cri.go:89] found id: ""
	I0717 01:22:26.385508   64655 logs.go:276] 0 containers: []
	W0717 01:22:26.385517   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:22:26.385523   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:22:26.385573   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:22:26.421292   64655 cri.go:89] found id: ""
	I0717 01:22:26.421318   64655 logs.go:276] 0 containers: []
	W0717 01:22:26.421327   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:22:26.421333   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:22:26.421388   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:22:26.455873   64655 cri.go:89] found id: ""
	I0717 01:22:26.455913   64655 logs.go:276] 0 containers: []
	W0717 01:22:26.455924   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:22:26.455932   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:22:26.455987   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:22:26.489452   64655 cri.go:89] found id: ""
	I0717 01:22:26.489488   64655 logs.go:276] 0 containers: []
	W0717 01:22:26.489500   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:22:26.489507   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:22:26.489566   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:22:26.525461   64655 cri.go:89] found id: ""
	I0717 01:22:26.525487   64655 logs.go:276] 0 containers: []
	W0717 01:22:26.525495   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:22:26.525501   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:22:26.525549   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:22:26.564429   64655 cri.go:89] found id: ""
	I0717 01:22:26.564456   64655 logs.go:276] 0 containers: []
	W0717 01:22:26.564463   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:22:26.564472   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:22:26.564483   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:22:26.621836   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:22:26.621866   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:22:26.635570   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:22:26.635593   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:22:26.713865   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:22:26.713886   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:22:26.713898   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:22:26.782276   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:22:26.782304   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
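	Every "describe nodes" attempt in these cycles fails the same way: the kubectl bundled at /var/lib/minikube/binaries/v1.20.0/kubectl cannot reach the apiserver on localhost:8443 because no kube-apiserver container ever came up. A hedged sketch of how one might confirm that from the node; the curl probe is an assumption (standard apiserver /healthz endpoint, and it assumes curl is present in the node image), while the kubectl invocation is the one from the log:

	  # Probe the port the log complains about (-k because the apiserver
	  # serves a self-signed cert); a refused connection confirms it is down.
	  curl -ksS https://localhost:8443/healthz || echo "apiserver not reachable on :8443"

	  # Re-run the exact command the harness uses for "describe nodes".
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig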
	I0717 01:22:29.326065   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:29.340419   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:22:29.340486   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:22:29.374023   64655 cri.go:89] found id: ""
	I0717 01:22:29.374049   64655 logs.go:276] 0 containers: []
	W0717 01:22:29.374057   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:22:29.374066   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:22:29.374122   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:22:29.408720   64655 cri.go:89] found id: ""
	I0717 01:22:29.408752   64655 logs.go:276] 0 containers: []
	W0717 01:22:29.408764   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:22:29.408773   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:22:29.408848   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:22:29.442436   64655 cri.go:89] found id: ""
	I0717 01:22:29.442459   64655 logs.go:276] 0 containers: []
	W0717 01:22:29.442467   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:22:29.442473   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:22:29.442528   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:22:29.475737   64655 cri.go:89] found id: ""
	I0717 01:22:29.475764   64655 logs.go:276] 0 containers: []
	W0717 01:22:29.475776   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:22:29.475782   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:22:29.475835   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:22:29.510116   64655 cri.go:89] found id: ""
	I0717 01:22:29.510149   64655 logs.go:276] 0 containers: []
	W0717 01:22:29.510157   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:22:29.510162   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:22:29.510212   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:22:29.545124   64655 cri.go:89] found id: ""
	I0717 01:22:29.545147   64655 logs.go:276] 0 containers: []
	W0717 01:22:29.545154   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:22:29.545160   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:22:29.545204   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:22:29.577800   64655 cri.go:89] found id: ""
	I0717 01:22:29.577829   64655 logs.go:276] 0 containers: []
	W0717 01:22:29.577839   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:22:29.577845   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:22:29.577895   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:22:29.609551   64655 cri.go:89] found id: ""
	I0717 01:22:29.609581   64655 logs.go:276] 0 containers: []
	W0717 01:22:29.609591   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:22:29.609606   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:22:29.609625   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:22:29.622557   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:22:29.622583   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:22:29.694112   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:22:29.694134   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:22:29.694150   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:22:29.767688   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:22:29.767724   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:22:29.808942   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:22:29.808970   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
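	With the API unreachable, the harness falls back to host-level logs each cycle: the kubelet and CRI-O journals, a filtered dmesg, and a raw container listing. A short sketch that collects the same material by hand; the commands are taken from the log lines above, with the backtick substitution rewritten as $(...):

	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a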
	I0717 01:22:32.362693   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:32.375531   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:22:32.375616   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:22:32.411113   64655 cri.go:89] found id: ""
	I0717 01:22:32.411141   64655 logs.go:276] 0 containers: []
	W0717 01:22:32.411151   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:22:32.411159   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:22:32.411345   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:22:32.461839   64655 cri.go:89] found id: ""
	I0717 01:22:32.461868   64655 logs.go:276] 0 containers: []
	W0717 01:22:32.461878   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:22:32.461886   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:22:32.461955   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:22:32.501158   64655 cri.go:89] found id: ""
	I0717 01:22:32.501190   64655 logs.go:276] 0 containers: []
	W0717 01:22:32.501200   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:22:32.501207   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:22:32.501270   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:22:32.534386   64655 cri.go:89] found id: ""
	I0717 01:22:32.534421   64655 logs.go:276] 0 containers: []
	W0717 01:22:32.534437   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:22:32.534444   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:22:32.534503   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:22:32.573034   64655 cri.go:89] found id: ""
	I0717 01:22:32.573061   64655 logs.go:276] 0 containers: []
	W0717 01:22:32.573072   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:22:32.573079   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:22:32.573151   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:22:32.609860   64655 cri.go:89] found id: ""
	I0717 01:22:32.609888   64655 logs.go:276] 0 containers: []
	W0717 01:22:32.609897   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:22:32.609903   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:22:32.609955   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:22:32.654699   64655 cri.go:89] found id: ""
	I0717 01:22:32.654730   64655 logs.go:276] 0 containers: []
	W0717 01:22:32.654742   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:22:32.654749   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:22:32.654813   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:22:32.691709   64655 cri.go:89] found id: ""
	I0717 01:22:32.691741   64655 logs.go:276] 0 containers: []
	W0717 01:22:32.691753   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:22:32.691764   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:22:32.691781   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:22:32.752361   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:22:32.752398   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:22:32.768154   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:22:32.768183   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:22:32.845231   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:22:32.845250   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:22:32.845263   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:22:32.923269   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:22:32.923302   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
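	Each cycle begins with a process-level check before the CRI polls: pgrep with -f matches against the full command line, -x requires the pattern to match it exactly, and -n picks the newest match, so a zero exit status would mean a kube-apiserver process is at least running. A minimal sketch using the pattern from the log:

	  if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	    echo "kube-apiserver process is running"
	  else
	    echo "no kube-apiserver process found"   # the case seen throughout this log
	  fi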
	I0717 01:22:35.467248   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:35.480850   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:22:35.480915   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:22:35.515707   64655 cri.go:89] found id: ""
	I0717 01:22:35.515736   64655 logs.go:276] 0 containers: []
	W0717 01:22:35.515743   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:22:35.515749   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:22:35.515796   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:22:35.549348   64655 cri.go:89] found id: ""
	I0717 01:22:35.549373   64655 logs.go:276] 0 containers: []
	W0717 01:22:35.549381   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:22:35.549386   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:22:35.549447   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:22:35.583226   64655 cri.go:89] found id: ""
	I0717 01:22:35.583251   64655 logs.go:276] 0 containers: []
	W0717 01:22:35.583262   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:22:35.583269   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:22:35.583328   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:22:35.616600   64655 cri.go:89] found id: ""
	I0717 01:22:35.616628   64655 logs.go:276] 0 containers: []
	W0717 01:22:35.616636   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:22:35.616642   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:22:35.616698   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:22:35.651050   64655 cri.go:89] found id: ""
	I0717 01:22:35.651074   64655 logs.go:276] 0 containers: []
	W0717 01:22:35.651092   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:22:35.651099   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:22:35.651153   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:22:35.684807   64655 cri.go:89] found id: ""
	I0717 01:22:35.684839   64655 logs.go:276] 0 containers: []
	W0717 01:22:35.684849   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:22:35.684857   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:22:35.684933   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:22:35.719933   64655 cri.go:89] found id: ""
	I0717 01:22:35.719960   64655 logs.go:276] 0 containers: []
	W0717 01:22:35.719970   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:22:35.719979   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:22:35.720043   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:22:35.755366   64655 cri.go:89] found id: ""
	I0717 01:22:35.755400   64655 logs.go:276] 0 containers: []
	W0717 01:22:35.755411   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:22:35.755422   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:22:35.755437   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:22:35.768206   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:22:35.768235   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:22:35.834836   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:22:35.834861   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:22:35.834876   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:22:35.917062   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:22:35.917099   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:22:35.954238   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:22:35.954273   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:22:38.506227   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:38.519451   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:22:38.519518   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:22:38.560425   64655 cri.go:89] found id: ""
	I0717 01:22:38.560454   64655 logs.go:276] 0 containers: []
	W0717 01:22:38.560464   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:22:38.560470   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:22:38.560532   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:22:38.601809   64655 cri.go:89] found id: ""
	I0717 01:22:38.601835   64655 logs.go:276] 0 containers: []
	W0717 01:22:38.601843   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:22:38.601849   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:22:38.601898   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:22:38.639988   64655 cri.go:89] found id: ""
	I0717 01:22:38.640012   64655 logs.go:276] 0 containers: []
	W0717 01:22:38.640019   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:22:38.640025   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:22:38.640092   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:22:38.673090   64655 cri.go:89] found id: ""
	I0717 01:22:38.673126   64655 logs.go:276] 0 containers: []
	W0717 01:22:38.673134   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:22:38.673140   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:22:38.673195   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:22:38.706226   64655 cri.go:89] found id: ""
	I0717 01:22:38.706252   64655 logs.go:276] 0 containers: []
	W0717 01:22:38.706263   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:22:38.706270   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:22:38.706331   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:22:38.740886   64655 cri.go:89] found id: ""
	I0717 01:22:38.740913   64655 logs.go:276] 0 containers: []
	W0717 01:22:38.740923   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:22:38.740931   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:22:38.740992   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:22:38.773418   64655 cri.go:89] found id: ""
	I0717 01:22:38.773443   64655 logs.go:276] 0 containers: []
	W0717 01:22:38.773452   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:22:38.773459   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:22:38.773520   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:22:38.809344   64655 cri.go:89] found id: ""
	I0717 01:22:38.809377   64655 logs.go:276] 0 containers: []
	W0717 01:22:38.809386   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:22:38.809395   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:22:38.809408   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:22:38.861923   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:22:38.861957   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:22:38.877714   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:22:38.877739   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:22:38.954645   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:22:38.954667   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:22:38.954679   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:22:39.032710   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:22:39.032748   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:22:41.572820   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:41.585839   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:22:41.585896   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:22:41.618965   64655 cri.go:89] found id: ""
	I0717 01:22:41.618990   64655 logs.go:276] 0 containers: []
	W0717 01:22:41.618997   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:22:41.619003   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:22:41.619050   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:22:41.652388   64655 cri.go:89] found id: ""
	I0717 01:22:41.652411   64655 logs.go:276] 0 containers: []
	W0717 01:22:41.652418   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:22:41.652424   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:22:41.652470   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:22:41.689279   64655 cri.go:89] found id: ""
	I0717 01:22:41.689307   64655 logs.go:276] 0 containers: []
	W0717 01:22:41.689315   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:22:41.689321   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:22:41.689383   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:22:41.722220   64655 cri.go:89] found id: ""
	I0717 01:22:41.722255   64655 logs.go:276] 0 containers: []
	W0717 01:22:41.722266   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:22:41.722275   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:22:41.722327   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:22:41.759133   64655 cri.go:89] found id: ""
	I0717 01:22:41.759155   64655 logs.go:276] 0 containers: []
	W0717 01:22:41.759164   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:22:41.759170   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:22:41.759217   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:22:41.796702   64655 cri.go:89] found id: ""
	I0717 01:22:41.796724   64655 logs.go:276] 0 containers: []
	W0717 01:22:41.796732   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:22:41.796738   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:22:41.796784   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:22:41.832708   64655 cri.go:89] found id: ""
	I0717 01:22:41.832731   64655 logs.go:276] 0 containers: []
	W0717 01:22:41.832739   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:22:41.832744   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:22:41.832789   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:22:41.866804   64655 cri.go:89] found id: ""
	I0717 01:22:41.866832   64655 logs.go:276] 0 containers: []
	W0717 01:22:41.866847   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:22:41.866859   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:22:41.866875   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:22:41.880140   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:22:41.880177   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:22:41.947570   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:22:41.947598   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:22:41.947617   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:22:42.023331   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:22:42.023362   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:22:42.061845   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:22:42.061882   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:22:44.613829   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:44.626566   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:22:44.626626   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:22:44.660076   64655 cri.go:89] found id: ""
	I0717 01:22:44.660101   64655 logs.go:276] 0 containers: []
	W0717 01:22:44.660108   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:22:44.660114   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:22:44.660174   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:22:44.694428   64655 cri.go:89] found id: ""
	I0717 01:22:44.694454   64655 logs.go:276] 0 containers: []
	W0717 01:22:44.694463   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:22:44.694469   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:22:44.694538   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:22:44.730938   64655 cri.go:89] found id: ""
	I0717 01:22:44.730965   64655 logs.go:276] 0 containers: []
	W0717 01:22:44.730975   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:22:44.730983   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:22:44.731038   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:22:44.765636   64655 cri.go:89] found id: ""
	I0717 01:22:44.765677   64655 logs.go:276] 0 containers: []
	W0717 01:22:44.765707   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:22:44.765719   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:22:44.765779   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:22:44.801489   64655 cri.go:89] found id: ""
	I0717 01:22:44.801516   64655 logs.go:276] 0 containers: []
	W0717 01:22:44.801523   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:22:44.801529   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:22:44.801583   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:22:44.834848   64655 cri.go:89] found id: ""
	I0717 01:22:44.834884   64655 logs.go:276] 0 containers: []
	W0717 01:22:44.834896   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:22:44.834904   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:22:44.834965   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:22:44.869282   64655 cri.go:89] found id: ""
	I0717 01:22:44.869313   64655 logs.go:276] 0 containers: []
	W0717 01:22:44.869323   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:22:44.869336   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:22:44.869420   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:22:44.902157   64655 cri.go:89] found id: ""
	I0717 01:22:44.902180   64655 logs.go:276] 0 containers: []
	W0717 01:22:44.902187   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:22:44.902194   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:22:44.902206   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:22:44.940638   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:22:44.940667   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:22:44.988391   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:22:44.988421   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:22:45.002296   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:22:45.002324   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:22:45.071815   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:22:45.071841   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:22:45.071856   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:22:47.655153   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:47.667827   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:22:47.667897   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:22:47.708277   64655 cri.go:89] found id: ""
	I0717 01:22:47.708302   64655 logs.go:276] 0 containers: []
	W0717 01:22:47.708310   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:22:47.708316   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:22:47.708377   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:22:47.745597   64655 cri.go:89] found id: ""
	I0717 01:22:47.745620   64655 logs.go:276] 0 containers: []
	W0717 01:22:47.745630   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:22:47.745637   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:22:47.745694   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:22:47.778599   64655 cri.go:89] found id: ""
	I0717 01:22:47.778624   64655 logs.go:276] 0 containers: []
	W0717 01:22:47.778632   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:22:47.778638   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:22:47.778695   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:22:47.813341   64655 cri.go:89] found id: ""
	I0717 01:22:47.813366   64655 logs.go:276] 0 containers: []
	W0717 01:22:47.813376   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:22:47.813384   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:22:47.813444   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:22:47.854998   64655 cri.go:89] found id: ""
	I0717 01:22:47.855030   64655 logs.go:276] 0 containers: []
	W0717 01:22:47.855043   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:22:47.855053   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:22:47.855132   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:22:47.890421   64655 cri.go:89] found id: ""
	I0717 01:22:47.890452   64655 logs.go:276] 0 containers: []
	W0717 01:22:47.890463   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:22:47.890471   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:22:47.890528   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:22:47.921981   64655 cri.go:89] found id: ""
	I0717 01:22:47.922008   64655 logs.go:276] 0 containers: []
	W0717 01:22:47.922019   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:22:47.922026   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:22:47.922082   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:22:47.959185   64655 cri.go:89] found id: ""
	I0717 01:22:47.959208   64655 logs.go:276] 0 containers: []
	W0717 01:22:47.959215   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:22:47.959223   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:22:47.959235   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:22:48.010862   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:22:48.010893   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:22:48.026961   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:22:48.026985   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:22:48.094570   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:22:48.094600   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:22:48.094616   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:22:48.177812   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:22:48.177847   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:22:50.716941   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:50.731564   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:22:50.731642   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:22:50.767796   64655 cri.go:89] found id: ""
	I0717 01:22:50.767821   64655 logs.go:276] 0 containers: []
	W0717 01:22:50.767829   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:22:50.767840   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:22:50.767895   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:22:50.804050   64655 cri.go:89] found id: ""
	I0717 01:22:50.804078   64655 logs.go:276] 0 containers: []
	W0717 01:22:50.804086   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:22:50.804094   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:22:50.804153   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:22:50.837565   64655 cri.go:89] found id: ""
	I0717 01:22:50.837588   64655 logs.go:276] 0 containers: []
	W0717 01:22:50.837595   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:22:50.837602   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:22:50.837659   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:22:50.874353   64655 cri.go:89] found id: ""
	I0717 01:22:50.874374   64655 logs.go:276] 0 containers: []
	W0717 01:22:50.874381   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:22:50.874388   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:22:50.874448   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:22:50.910377   64655 cri.go:89] found id: ""
	I0717 01:22:50.910403   64655 logs.go:276] 0 containers: []
	W0717 01:22:50.910414   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:22:50.910421   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:22:50.910477   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:22:50.945195   64655 cri.go:89] found id: ""
	I0717 01:22:50.945226   64655 logs.go:276] 0 containers: []
	W0717 01:22:50.945238   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:22:50.945245   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:22:50.945301   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:22:50.979287   64655 cri.go:89] found id: ""
	I0717 01:22:50.979314   64655 logs.go:276] 0 containers: []
	W0717 01:22:50.979322   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:22:50.979328   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:22:50.979386   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:22:51.017809   64655 cri.go:89] found id: ""
	I0717 01:22:51.017833   64655 logs.go:276] 0 containers: []
	W0717 01:22:51.017841   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:22:51.017849   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:22:51.017860   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:22:51.069974   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:22:51.070009   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:22:51.083331   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:22:51.083359   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:22:51.154343   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:22:51.154367   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:22:51.154378   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:22:51.228693   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:22:51.228727   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:22:53.770649   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:53.784045   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:22:53.784115   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:22:53.821825   64655 cri.go:89] found id: ""
	I0717 01:22:53.821849   64655 logs.go:276] 0 containers: []
	W0717 01:22:53.821857   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:22:53.821863   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:22:53.821918   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:22:53.855025   64655 cri.go:89] found id: ""
	I0717 01:22:53.855050   64655 logs.go:276] 0 containers: []
	W0717 01:22:53.855057   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:22:53.855062   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:22:53.855116   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:22:53.895872   64655 cri.go:89] found id: ""
	I0717 01:22:53.895900   64655 logs.go:276] 0 containers: []
	W0717 01:22:53.895907   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:22:53.895913   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:22:53.895969   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:22:53.930118   64655 cri.go:89] found id: ""
	I0717 01:22:53.930146   64655 logs.go:276] 0 containers: []
	W0717 01:22:53.930161   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:22:53.930171   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:22:53.930238   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:22:53.963759   64655 cri.go:89] found id: ""
	I0717 01:22:53.963781   64655 logs.go:276] 0 containers: []
	W0717 01:22:53.963789   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:22:53.963794   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:22:53.963850   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:22:53.997218   64655 cri.go:89] found id: ""
	I0717 01:22:53.997253   64655 logs.go:276] 0 containers: []
	W0717 01:22:53.997266   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:22:53.997275   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:22:53.997335   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:22:54.028835   64655 cri.go:89] found id: ""
	I0717 01:22:54.028872   64655 logs.go:276] 0 containers: []
	W0717 01:22:54.028882   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:22:54.028889   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:22:54.028952   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:22:54.062451   64655 cri.go:89] found id: ""
	I0717 01:22:54.062486   64655 logs.go:276] 0 containers: []
	W0717 01:22:54.062499   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:22:54.062512   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:22:54.062527   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:22:54.113322   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:22:54.113359   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:22:54.126695   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:22:54.126719   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:22:54.190921   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:22:54.190950   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:22:54.190966   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:22:54.271328   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:22:54.271405   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:22:56.810680   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:56.824773   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:22:56.824846   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:22:56.859448   64655 cri.go:89] found id: ""
	I0717 01:22:56.859481   64655 logs.go:276] 0 containers: []
	W0717 01:22:56.859490   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:22:56.859496   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:22:56.859549   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:22:56.892387   64655 cri.go:89] found id: ""
	I0717 01:22:56.892417   64655 logs.go:276] 0 containers: []
	W0717 01:22:56.892430   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:22:56.892438   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:22:56.892506   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:22:56.927874   64655 cri.go:89] found id: ""
	I0717 01:22:56.927906   64655 logs.go:276] 0 containers: []
	W0717 01:22:56.927914   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:22:56.927920   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:22:56.927968   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:22:56.960444   64655 cri.go:89] found id: ""
	I0717 01:22:56.960472   64655 logs.go:276] 0 containers: []
	W0717 01:22:56.960482   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:22:56.960490   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:22:56.960551   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:22:56.995731   64655 cri.go:89] found id: ""
	I0717 01:22:56.995766   64655 logs.go:276] 0 containers: []
	W0717 01:22:56.995777   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:22:56.995786   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:22:56.995856   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:22:57.030552   64655 cri.go:89] found id: ""
	I0717 01:22:57.030580   64655 logs.go:276] 0 containers: []
	W0717 01:22:57.030590   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:22:57.030598   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:22:57.030660   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:22:57.064174   64655 cri.go:89] found id: ""
	I0717 01:22:57.064199   64655 logs.go:276] 0 containers: []
	W0717 01:22:57.064207   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:22:57.064213   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:22:57.064262   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:22:57.095881   64655 cri.go:89] found id: ""
	I0717 01:22:57.095906   64655 logs.go:276] 0 containers: []
	W0717 01:22:57.095913   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:22:57.095921   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:22:57.095933   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:22:57.108056   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:22:57.108082   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:22:57.172624   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:22:57.172647   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:22:57.172660   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:22:57.251360   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:22:57.251389   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:22:57.291651   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:22:57.291681   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
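The cycle above shows the retry loop probing for each control-plane container with crictl and coming up empty every time ("found id: """ / "0 containers"). A minimal Go sketch of that probe, assuming only the crictl invocation visible in the log; the helper name findContainers is hypothetical and not minikube's own code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// findContainers runs the same probe the log shows:
	// `sudo crictl ps -a --quiet --name=<name>` and returns the IDs found.
	// Hypothetical helper; the real logic lives in minikube's cri.go.
	func findContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		// crictl --quiet prints one container ID per line; none here means 0 containers.
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := findContainers(name)
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", name, err)
				continue
			}
			// An empty slice matches the repeated `0 containers: []` lines in the log.
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	}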
	I0717 01:22:59.843858   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:22:59.857388   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:22:59.857467   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:22:59.895425   64655 cri.go:89] found id: ""
	I0717 01:22:59.895465   64655 logs.go:276] 0 containers: []
	W0717 01:22:59.895478   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:22:59.895487   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:22:59.895556   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:22:59.929928   64655 cri.go:89] found id: ""
	I0717 01:22:59.929959   64655 logs.go:276] 0 containers: []
	W0717 01:22:59.929970   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:22:59.929977   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:22:59.930038   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:22:59.962273   64655 cri.go:89] found id: ""
	I0717 01:22:59.962298   64655 logs.go:276] 0 containers: []
	W0717 01:22:59.962308   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:22:59.962316   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:22:59.962378   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:00.011745   64655 cri.go:89] found id: ""
	I0717 01:23:00.011779   64655 logs.go:276] 0 containers: []
	W0717 01:23:00.011791   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:00.011799   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:00.011856   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:00.045963   64655 cri.go:89] found id: ""
	I0717 01:23:00.045995   64655 logs.go:276] 0 containers: []
	W0717 01:23:00.046005   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:00.046012   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:00.046081   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:00.080799   64655 cri.go:89] found id: ""
	I0717 01:23:00.080830   64655 logs.go:276] 0 containers: []
	W0717 01:23:00.080845   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:00.080853   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:00.080915   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:00.119904   64655 cri.go:89] found id: ""
	I0717 01:23:00.119927   64655 logs.go:276] 0 containers: []
	W0717 01:23:00.119934   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:00.119940   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:00.119985   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:00.157710   64655 cri.go:89] found id: ""
	I0717 01:23:00.157742   64655 logs.go:276] 0 containers: []
	W0717 01:23:00.157750   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:00.157758   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:00.157775   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:00.209326   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:00.209366   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:00.223279   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:00.223314   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:00.293661   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:00.293681   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:00.293693   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:00.372070   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:00.372110   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:02.911360   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:02.925143   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:02.925206   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:02.958775   64655 cri.go:89] found id: ""
	I0717 01:23:02.958801   64655 logs.go:276] 0 containers: []
	W0717 01:23:02.958811   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:02.958818   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:02.958881   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:02.990952   64655 cri.go:89] found id: ""
	I0717 01:23:02.990977   64655 logs.go:276] 0 containers: []
	W0717 01:23:02.990985   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:02.990991   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:02.991040   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:03.022140   64655 cri.go:89] found id: ""
	I0717 01:23:03.022165   64655 logs.go:276] 0 containers: []
	W0717 01:23:03.022172   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:03.022177   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:03.022223   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:03.063105   64655 cri.go:89] found id: ""
	I0717 01:23:03.063134   64655 logs.go:276] 0 containers: []
	W0717 01:23:03.063145   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:03.063152   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:03.063217   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:03.097807   64655 cri.go:89] found id: ""
	I0717 01:23:03.097833   64655 logs.go:276] 0 containers: []
	W0717 01:23:03.097840   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:03.097846   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:03.097895   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:03.134373   64655 cri.go:89] found id: ""
	I0717 01:23:03.134403   64655 logs.go:276] 0 containers: []
	W0717 01:23:03.134413   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:03.134427   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:03.134489   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:03.167848   64655 cri.go:89] found id: ""
	I0717 01:23:03.167873   64655 logs.go:276] 0 containers: []
	W0717 01:23:03.167888   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:03.167896   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:03.167954   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:03.201220   64655 cri.go:89] found id: ""
	I0717 01:23:03.201241   64655 logs.go:276] 0 containers: []
	W0717 01:23:03.201250   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:03.201260   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:03.201276   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:03.238102   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:03.238126   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:03.289609   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:03.289641   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:03.303721   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:03.303743   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:03.367401   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:03.367441   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:03.367460   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
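Every "describe nodes" attempt in these cycles fails the same way: kubectl cannot reach localhost:8443, which is consistent with the empty kube-apiserver listings above. A quick, hedged way to confirm the port is simply closed (an illustrative sketch, not part of the test harness):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// localhost:8443 is the apiserver endpoint kubectl complains about in the log.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// A "connection refused" here mirrors the kubectl error captured above.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}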
	I0717 01:23:05.951955   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:05.967212   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:05.967270   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:06.003348   64655 cri.go:89] found id: ""
	I0717 01:23:06.003382   64655 logs.go:276] 0 containers: []
	W0717 01:23:06.003395   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:06.003405   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:06.003463   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:06.037684   64655 cri.go:89] found id: ""
	I0717 01:23:06.037728   64655 logs.go:276] 0 containers: []
	W0717 01:23:06.037748   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:06.037759   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:06.037821   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:06.073950   64655 cri.go:89] found id: ""
	I0717 01:23:06.073976   64655 logs.go:276] 0 containers: []
	W0717 01:23:06.073984   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:06.073990   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:06.074033   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:06.111696   64655 cri.go:89] found id: ""
	I0717 01:23:06.111735   64655 logs.go:276] 0 containers: []
	W0717 01:23:06.111746   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:06.111755   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:06.111819   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:06.152889   64655 cri.go:89] found id: ""
	I0717 01:23:06.152915   64655 logs.go:276] 0 containers: []
	W0717 01:23:06.152922   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:06.152929   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:06.152985   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:06.190207   64655 cri.go:89] found id: ""
	I0717 01:23:06.190231   64655 logs.go:276] 0 containers: []
	W0717 01:23:06.190239   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:06.190245   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:06.190307   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:06.227793   64655 cri.go:89] found id: ""
	I0717 01:23:06.227821   64655 logs.go:276] 0 containers: []
	W0717 01:23:06.227828   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:06.227838   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:06.227902   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:06.263229   64655 cri.go:89] found id: ""
	I0717 01:23:06.263257   64655 logs.go:276] 0 containers: []
	W0717 01:23:06.263265   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:06.263273   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:06.263289   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:06.336688   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:06.336712   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:06.336733   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:06.420660   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:06.420696   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:06.494828   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:06.494860   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:06.554159   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:06.554195   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:09.068393   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:09.081211   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:09.081278   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:09.120376   64655 cri.go:89] found id: ""
	I0717 01:23:09.120403   64655 logs.go:276] 0 containers: []
	W0717 01:23:09.120411   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:09.120416   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:09.120473   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:09.153889   64655 cri.go:89] found id: ""
	I0717 01:23:09.153915   64655 logs.go:276] 0 containers: []
	W0717 01:23:09.153925   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:09.153932   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:09.153991   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:09.188201   64655 cri.go:89] found id: ""
	I0717 01:23:09.188226   64655 logs.go:276] 0 containers: []
	W0717 01:23:09.188233   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:09.188239   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:09.188293   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:09.221266   64655 cri.go:89] found id: ""
	I0717 01:23:09.221293   64655 logs.go:276] 0 containers: []
	W0717 01:23:09.221300   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:09.221306   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:09.221353   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:09.254996   64655 cri.go:89] found id: ""
	I0717 01:23:09.255020   64655 logs.go:276] 0 containers: []
	W0717 01:23:09.255027   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:09.255032   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:09.255089   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:09.288863   64655 cri.go:89] found id: ""
	I0717 01:23:09.288900   64655 logs.go:276] 0 containers: []
	W0717 01:23:09.288910   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:09.288918   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:09.288979   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:09.322304   64655 cri.go:89] found id: ""
	I0717 01:23:09.322325   64655 logs.go:276] 0 containers: []
	W0717 01:23:09.322332   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:09.322337   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:09.322392   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:09.354533   64655 cri.go:89] found id: ""
	I0717 01:23:09.354553   64655 logs.go:276] 0 containers: []
	W0717 01:23:09.354560   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:09.354567   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:09.354581   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:09.402809   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:09.402838   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:09.416324   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:09.416349   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:09.483904   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:09.483921   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:09.483934   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:09.563081   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:09.563128   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:12.105204   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:12.120997   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:12.121058   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:12.159994   64655 cri.go:89] found id: ""
	I0717 01:23:12.160024   64655 logs.go:276] 0 containers: []
	W0717 01:23:12.160033   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:12.160044   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:12.160101   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:12.208701   64655 cri.go:89] found id: ""
	I0717 01:23:12.208730   64655 logs.go:276] 0 containers: []
	W0717 01:23:12.208738   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:12.208746   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:12.208799   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:12.267924   64655 cri.go:89] found id: ""
	I0717 01:23:12.267958   64655 logs.go:276] 0 containers: []
	W0717 01:23:12.267970   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:12.267979   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:12.268049   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:12.301952   64655 cri.go:89] found id: ""
	I0717 01:23:12.301974   64655 logs.go:276] 0 containers: []
	W0717 01:23:12.301981   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:12.301987   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:12.302036   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:12.334565   64655 cri.go:89] found id: ""
	I0717 01:23:12.334595   64655 logs.go:276] 0 containers: []
	W0717 01:23:12.334603   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:12.334609   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:12.334662   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:12.368128   64655 cri.go:89] found id: ""
	I0717 01:23:12.368155   64655 logs.go:276] 0 containers: []
	W0717 01:23:12.368164   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:12.368170   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:12.368263   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:12.402276   64655 cri.go:89] found id: ""
	I0717 01:23:12.402305   64655 logs.go:276] 0 containers: []
	W0717 01:23:12.402313   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:12.402318   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:12.402364   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:12.438254   64655 cri.go:89] found id: ""
	I0717 01:23:12.438280   64655 logs.go:276] 0 containers: []
	W0717 01:23:12.438287   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:12.438296   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:12.438312   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:12.510665   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:12.510697   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:12.510711   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:12.584656   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:12.584695   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:12.626920   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:12.626949   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:12.679877   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:12.679914   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:15.194061   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:15.208747   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:15.208811   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:15.244787   64655 cri.go:89] found id: ""
	I0717 01:23:15.244810   64655 logs.go:276] 0 containers: []
	W0717 01:23:15.244817   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:15.244825   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:15.244880   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:15.278281   64655 cri.go:89] found id: ""
	I0717 01:23:15.278307   64655 logs.go:276] 0 containers: []
	W0717 01:23:15.278320   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:15.278325   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:15.278381   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:15.314862   64655 cri.go:89] found id: ""
	I0717 01:23:15.314890   64655 logs.go:276] 0 containers: []
	W0717 01:23:15.314898   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:15.314904   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:15.314951   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:15.349123   64655 cri.go:89] found id: ""
	I0717 01:23:15.349147   64655 logs.go:276] 0 containers: []
	W0717 01:23:15.349165   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:15.349174   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:15.349230   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:15.380801   64655 cri.go:89] found id: ""
	I0717 01:23:15.380827   64655 logs.go:276] 0 containers: []
	W0717 01:23:15.380835   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:15.380841   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:15.380897   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:15.413279   64655 cri.go:89] found id: ""
	I0717 01:23:15.413307   64655 logs.go:276] 0 containers: []
	W0717 01:23:15.413318   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:15.413325   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:15.413387   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:15.445944   64655 cri.go:89] found id: ""
	I0717 01:23:15.445974   64655 logs.go:276] 0 containers: []
	W0717 01:23:15.445982   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:15.445988   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:15.446043   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:15.478474   64655 cri.go:89] found id: ""
	I0717 01:23:15.478502   64655 logs.go:276] 0 containers: []
	W0717 01:23:15.478511   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:15.478521   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:15.478535   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:15.554141   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:15.554179   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:15.591289   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:15.591314   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:15.644208   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:15.644243   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:15.657606   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:15.657633   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:15.722888   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:18.223162   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:18.238892   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:18.238963   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:18.285337   64655 cri.go:89] found id: ""
	I0717 01:23:18.285363   64655 logs.go:276] 0 containers: []
	W0717 01:23:18.285371   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:18.285377   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:18.285434   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:18.321065   64655 cri.go:89] found id: ""
	I0717 01:23:18.321092   64655 logs.go:276] 0 containers: []
	W0717 01:23:18.321103   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:18.321110   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:18.321171   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:18.356552   64655 cri.go:89] found id: ""
	I0717 01:23:18.356596   64655 logs.go:276] 0 containers: []
	W0717 01:23:18.356606   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:18.356614   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:18.356677   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:18.388716   64655 cri.go:89] found id: ""
	I0717 01:23:18.388745   64655 logs.go:276] 0 containers: []
	W0717 01:23:18.388755   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:18.388763   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:18.388822   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:18.421443   64655 cri.go:89] found id: ""
	I0717 01:23:18.421468   64655 logs.go:276] 0 containers: []
	W0717 01:23:18.421476   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:18.421482   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:18.421547   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:18.456722   64655 cri.go:89] found id: ""
	I0717 01:23:18.456748   64655 logs.go:276] 0 containers: []
	W0717 01:23:18.456759   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:18.456767   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:18.456829   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:18.493294   64655 cri.go:89] found id: ""
	I0717 01:23:18.493322   64655 logs.go:276] 0 containers: []
	W0717 01:23:18.493331   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:18.493340   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:18.493401   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:18.526447   64655 cri.go:89] found id: ""
	I0717 01:23:18.526472   64655 logs.go:276] 0 containers: []
	W0717 01:23:18.526482   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:18.526492   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:18.526508   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:18.577827   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:18.577860   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:18.591226   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:18.591251   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:18.656541   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:18.656578   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:18.656594   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:18.732877   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:18.732910   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
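The log advances in roughly three-second steps: each iteration starts with `sudo pgrep -xnf kube-apiserver.*minikube.*`, and when nothing matches it falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of that poll-until-deadline shape, assuming a hypothetical apiserverRunning helper built on the same pgrep call (the timeout below is illustrative, not minikube's):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the pgrep probe from the log; hypothetical helper.
	func apiserverRunning() bool {
		// pgrep exits non-zero when no process matches the pattern.
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			// In the real log this is where the kubelet/dmesg/CRI-O gathers happen.
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}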
	I0717 01:23:21.270327   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:21.285907   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:21.285992   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:21.323621   64655 cri.go:89] found id: ""
	I0717 01:23:21.323650   64655 logs.go:276] 0 containers: []
	W0717 01:23:21.323667   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:21.323673   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:21.323739   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:21.365026   64655 cri.go:89] found id: ""
	I0717 01:23:21.365054   64655 logs.go:276] 0 containers: []
	W0717 01:23:21.365063   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:21.365069   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:21.365133   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:21.401904   64655 cri.go:89] found id: ""
	I0717 01:23:21.401933   64655 logs.go:276] 0 containers: []
	W0717 01:23:21.401942   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:21.401948   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:21.402002   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:21.441695   64655 cri.go:89] found id: ""
	I0717 01:23:21.441722   64655 logs.go:276] 0 containers: []
	W0717 01:23:21.441731   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:21.441737   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:21.441801   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:21.476030   64655 cri.go:89] found id: ""
	I0717 01:23:21.476061   64655 logs.go:276] 0 containers: []
	W0717 01:23:21.476072   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:21.476080   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:21.476143   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:21.512023   64655 cri.go:89] found id: ""
	I0717 01:23:21.512051   64655 logs.go:276] 0 containers: []
	W0717 01:23:21.512061   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:21.512069   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:21.512130   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:21.549603   64655 cri.go:89] found id: ""
	I0717 01:23:21.549636   64655 logs.go:276] 0 containers: []
	W0717 01:23:21.549648   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:21.549658   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:21.549721   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:21.588321   64655 cri.go:89] found id: ""
	I0717 01:23:21.588350   64655 logs.go:276] 0 containers: []
	W0717 01:23:21.588360   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:21.588371   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:21.588388   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:21.601290   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:21.601317   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:21.669429   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:21.669452   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:21.669466   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:21.749466   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:21.749503   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:21.788009   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:21.788045   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:24.337785   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:24.352055   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:24.352148   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:24.387355   64655 cri.go:89] found id: ""
	I0717 01:23:24.387381   64655 logs.go:276] 0 containers: []
	W0717 01:23:24.387388   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:24.387394   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:24.387440   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:24.419004   64655 cri.go:89] found id: ""
	I0717 01:23:24.419034   64655 logs.go:276] 0 containers: []
	W0717 01:23:24.419044   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:24.419052   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:24.419116   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:24.453314   64655 cri.go:89] found id: ""
	I0717 01:23:24.453342   64655 logs.go:276] 0 containers: []
	W0717 01:23:24.453351   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:24.453357   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:24.453406   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:24.488312   64655 cri.go:89] found id: ""
	I0717 01:23:24.488340   64655 logs.go:276] 0 containers: []
	W0717 01:23:24.488349   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:24.488356   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:24.488405   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:24.524347   64655 cri.go:89] found id: ""
	I0717 01:23:24.524377   64655 logs.go:276] 0 containers: []
	W0717 01:23:24.524387   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:24.524394   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:24.524452   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:24.557683   64655 cri.go:89] found id: ""
	I0717 01:23:24.557724   64655 logs.go:276] 0 containers: []
	W0717 01:23:24.557733   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:24.557740   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:24.557801   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:24.592888   64655 cri.go:89] found id: ""
	I0717 01:23:24.592913   64655 logs.go:276] 0 containers: []
	W0717 01:23:24.592921   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:24.592926   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:24.592984   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:24.631205   64655 cri.go:89] found id: ""
	I0717 01:23:24.631234   64655 logs.go:276] 0 containers: []
	W0717 01:23:24.631246   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:24.631258   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:24.631273   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:24.682563   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:24.682594   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:24.696246   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:24.696305   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:24.764250   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:24.764276   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:24.764287   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:24.843972   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:24.844020   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:27.382440   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:27.395280   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:27.395339   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:27.426834   64655 cri.go:89] found id: ""
	I0717 01:23:27.426860   64655 logs.go:276] 0 containers: []
	W0717 01:23:27.426868   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:27.426873   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:27.426922   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:27.464177   64655 cri.go:89] found id: ""
	I0717 01:23:27.464205   64655 logs.go:276] 0 containers: []
	W0717 01:23:27.464212   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:27.464218   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:27.464271   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:27.506061   64655 cri.go:89] found id: ""
	I0717 01:23:27.506094   64655 logs.go:276] 0 containers: []
	W0717 01:23:27.506102   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:27.506109   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:27.506160   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:27.540213   64655 cri.go:89] found id: ""
	I0717 01:23:27.540239   64655 logs.go:276] 0 containers: []
	W0717 01:23:27.540250   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:27.540261   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:27.540321   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:27.576144   64655 cri.go:89] found id: ""
	I0717 01:23:27.576170   64655 logs.go:276] 0 containers: []
	W0717 01:23:27.576178   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:27.576185   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:27.576241   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:27.614738   64655 cri.go:89] found id: ""
	I0717 01:23:27.614771   64655 logs.go:276] 0 containers: []
	W0717 01:23:27.614783   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:27.614791   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:27.614856   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:27.648383   64655 cri.go:89] found id: ""
	I0717 01:23:27.648411   64655 logs.go:276] 0 containers: []
	W0717 01:23:27.648422   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:27.648435   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:27.648497   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:27.687859   64655 cri.go:89] found id: ""
	I0717 01:23:27.687890   64655 logs.go:276] 0 containers: []
	W0717 01:23:27.687903   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:27.687926   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:27.687952   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:27.739844   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:27.739880   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:27.753838   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:27.753865   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:27.826131   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:27.826156   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:27.826171   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:27.904628   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:27.904663   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:30.441964   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:30.455235   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:30.455298   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:30.490367   64655 cri.go:89] found id: ""
	I0717 01:23:30.490398   64655 logs.go:276] 0 containers: []
	W0717 01:23:30.490408   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:30.490415   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:30.490478   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:30.527910   64655 cri.go:89] found id: ""
	I0717 01:23:30.527941   64655 logs.go:276] 0 containers: []
	W0717 01:23:30.527952   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:30.527960   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:30.528011   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:30.564661   64655 cri.go:89] found id: ""
	I0717 01:23:30.564690   64655 logs.go:276] 0 containers: []
	W0717 01:23:30.564699   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:30.564705   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:30.564756   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:30.599451   64655 cri.go:89] found id: ""
	I0717 01:23:30.599476   64655 logs.go:276] 0 containers: []
	W0717 01:23:30.599486   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:30.599494   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:30.599551   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:30.632973   64655 cri.go:89] found id: ""
	I0717 01:23:30.632996   64655 logs.go:276] 0 containers: []
	W0717 01:23:30.633004   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:30.633011   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:30.633069   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:30.668091   64655 cri.go:89] found id: ""
	I0717 01:23:30.668117   64655 logs.go:276] 0 containers: []
	W0717 01:23:30.668126   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:30.668134   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:30.668200   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:30.701543   64655 cri.go:89] found id: ""
	I0717 01:23:30.701589   64655 logs.go:276] 0 containers: []
	W0717 01:23:30.701600   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:30.701608   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:30.701675   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:30.737013   64655 cri.go:89] found id: ""
	I0717 01:23:30.737043   64655 logs.go:276] 0 containers: []
	W0717 01:23:30.737053   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:30.737065   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:30.737080   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:30.788392   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:30.788423   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:30.802407   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:30.802437   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:30.872012   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:30.872030   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:30.872043   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:30.947147   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:30.947179   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:33.485916   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:33.499540   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:33.499607   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:33.536991   64655 cri.go:89] found id: ""
	I0717 01:23:33.537012   64655 logs.go:276] 0 containers: []
	W0717 01:23:33.537021   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:33.537028   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:33.537075   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:33.571065   64655 cri.go:89] found id: ""
	I0717 01:23:33.571090   64655 logs.go:276] 0 containers: []
	W0717 01:23:33.571099   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:33.571106   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:33.571169   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:33.607331   64655 cri.go:89] found id: ""
	I0717 01:23:33.607355   64655 logs.go:276] 0 containers: []
	W0717 01:23:33.607362   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:33.607368   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:33.607412   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:33.639612   64655 cri.go:89] found id: ""
	I0717 01:23:33.639640   64655 logs.go:276] 0 containers: []
	W0717 01:23:33.639647   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:33.639652   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:33.639706   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:33.674789   64655 cri.go:89] found id: ""
	I0717 01:23:33.674816   64655 logs.go:276] 0 containers: []
	W0717 01:23:33.674827   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:33.674834   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:33.674900   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:33.707506   64655 cri.go:89] found id: ""
	I0717 01:23:33.707541   64655 logs.go:276] 0 containers: []
	W0717 01:23:33.707552   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:33.707560   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:33.707619   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:33.742585   64655 cri.go:89] found id: ""
	I0717 01:23:33.742615   64655 logs.go:276] 0 containers: []
	W0717 01:23:33.742627   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:33.742634   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:33.742690   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:33.775686   64655 cri.go:89] found id: ""
	I0717 01:23:33.775719   64655 logs.go:276] 0 containers: []
	W0717 01:23:33.775729   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:33.775740   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:33.775754   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:33.788641   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:33.788666   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:33.854829   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:33.854852   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:33.854866   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:33.938677   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:33.938713   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:33.978472   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:33.978496   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:36.530692   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:36.543886   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:36.543955   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:36.585148   64655 cri.go:89] found id: ""
	I0717 01:23:36.585171   64655 logs.go:276] 0 containers: []
	W0717 01:23:36.585179   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:36.585184   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:36.585232   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:36.621553   64655 cri.go:89] found id: ""
	I0717 01:23:36.621581   64655 logs.go:276] 0 containers: []
	W0717 01:23:36.621589   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:36.621596   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:36.621647   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:36.662574   64655 cri.go:89] found id: ""
	I0717 01:23:36.662604   64655 logs.go:276] 0 containers: []
	W0717 01:23:36.662614   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:36.662620   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:36.662666   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:36.699083   64655 cri.go:89] found id: ""
	I0717 01:23:36.699111   64655 logs.go:276] 0 containers: []
	W0717 01:23:36.699122   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:36.699129   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:36.699193   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:36.732955   64655 cri.go:89] found id: ""
	I0717 01:23:36.732981   64655 logs.go:276] 0 containers: []
	W0717 01:23:36.732989   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:36.732995   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:36.733060   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:36.767672   64655 cri.go:89] found id: ""
	I0717 01:23:36.767698   64655 logs.go:276] 0 containers: []
	W0717 01:23:36.767713   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:36.767722   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:36.767783   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:36.803562   64655 cri.go:89] found id: ""
	I0717 01:23:36.803589   64655 logs.go:276] 0 containers: []
	W0717 01:23:36.803597   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:36.803602   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:36.803662   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:36.837522   64655 cri.go:89] found id: ""
	I0717 01:23:36.837555   64655 logs.go:276] 0 containers: []
	W0717 01:23:36.837565   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:36.837577   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:36.837590   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:36.888178   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:36.888208   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:36.901567   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:36.901593   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:36.974063   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:36.974091   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:36.974106   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:37.052648   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:37.052679   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:39.592686   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:39.606199   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:39.606260   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:39.640713   64655 cri.go:89] found id: ""
	I0717 01:23:39.640742   64655 logs.go:276] 0 containers: []
	W0717 01:23:39.640749   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:39.640755   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:39.640814   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:39.676315   64655 cri.go:89] found id: ""
	I0717 01:23:39.676340   64655 logs.go:276] 0 containers: []
	W0717 01:23:39.676351   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:39.676358   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:39.676415   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:39.718671   64655 cri.go:89] found id: ""
	I0717 01:23:39.718696   64655 logs.go:276] 0 containers: []
	W0717 01:23:39.718706   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:39.718714   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:39.718776   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:39.755620   64655 cri.go:89] found id: ""
	I0717 01:23:39.755644   64655 logs.go:276] 0 containers: []
	W0717 01:23:39.755652   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:39.755658   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:39.755719   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:39.787609   64655 cri.go:89] found id: ""
	I0717 01:23:39.787637   64655 logs.go:276] 0 containers: []
	W0717 01:23:39.787647   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:39.787655   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:39.787718   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:39.820538   64655 cri.go:89] found id: ""
	I0717 01:23:39.820576   64655 logs.go:276] 0 containers: []
	W0717 01:23:39.820587   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:39.820594   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:39.820653   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:39.854580   64655 cri.go:89] found id: ""
	I0717 01:23:39.854607   64655 logs.go:276] 0 containers: []
	W0717 01:23:39.854617   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:39.854625   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:39.854684   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:39.885995   64655 cri.go:89] found id: ""
	I0717 01:23:39.886021   64655 logs.go:276] 0 containers: []
	W0717 01:23:39.886032   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:39.886043   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:39.886057   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:39.934064   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:39.934097   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:39.948330   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:39.948363   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:40.016518   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:40.016539   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:40.016551   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:40.099167   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:40.099202   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:42.648023   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:42.660947   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:42.661024   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:42.696504   64655 cri.go:89] found id: ""
	I0717 01:23:42.696533   64655 logs.go:276] 0 containers: []
	W0717 01:23:42.696546   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:42.696568   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:42.696640   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:42.737923   64655 cri.go:89] found id: ""
	I0717 01:23:42.737950   64655 logs.go:276] 0 containers: []
	W0717 01:23:42.737961   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:42.737969   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:42.738038   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:42.770546   64655 cri.go:89] found id: ""
	I0717 01:23:42.770567   64655 logs.go:276] 0 containers: []
	W0717 01:23:42.770574   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:42.770580   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:42.770630   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:42.805269   64655 cri.go:89] found id: ""
	I0717 01:23:42.805301   64655 logs.go:276] 0 containers: []
	W0717 01:23:42.805312   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:42.805319   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:42.805381   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:42.838435   64655 cri.go:89] found id: ""
	I0717 01:23:42.838467   64655 logs.go:276] 0 containers: []
	W0717 01:23:42.838477   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:42.838484   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:42.838544   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:42.872200   64655 cri.go:89] found id: ""
	I0717 01:23:42.872225   64655 logs.go:276] 0 containers: []
	W0717 01:23:42.872231   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:42.872238   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:42.872286   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:42.908516   64655 cri.go:89] found id: ""
	I0717 01:23:42.908548   64655 logs.go:276] 0 containers: []
	W0717 01:23:42.908576   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:42.908584   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:42.908647   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:42.943523   64655 cri.go:89] found id: ""
	I0717 01:23:42.943552   64655 logs.go:276] 0 containers: []
	W0717 01:23:42.943560   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:42.943568   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:42.943579   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:42.995512   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:42.995543   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:43.009599   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:43.009627   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:43.074635   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:43.074660   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:43.074674   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:43.151144   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:43.151187   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:45.687963   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:45.701156   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:45.701217   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:45.735352   64655 cri.go:89] found id: ""
	I0717 01:23:45.735376   64655 logs.go:276] 0 containers: []
	W0717 01:23:45.735384   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:45.735390   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:45.735440   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:45.773913   64655 cri.go:89] found id: ""
	I0717 01:23:45.773937   64655 logs.go:276] 0 containers: []
	W0717 01:23:45.773945   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:45.773951   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:45.774000   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:45.807397   64655 cri.go:89] found id: ""
	I0717 01:23:45.807428   64655 logs.go:276] 0 containers: []
	W0717 01:23:45.807440   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:45.807448   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:45.807515   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:45.839372   64655 cri.go:89] found id: ""
	I0717 01:23:45.839398   64655 logs.go:276] 0 containers: []
	W0717 01:23:45.839406   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:45.839411   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:45.839460   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:45.873560   64655 cri.go:89] found id: ""
	I0717 01:23:45.873585   64655 logs.go:276] 0 containers: []
	W0717 01:23:45.873593   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:45.873603   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:45.873661   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:45.908819   64655 cri.go:89] found id: ""
	I0717 01:23:45.908849   64655 logs.go:276] 0 containers: []
	W0717 01:23:45.908858   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:45.908865   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:45.908913   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:45.948235   64655 cri.go:89] found id: ""
	I0717 01:23:45.948257   64655 logs.go:276] 0 containers: []
	W0717 01:23:45.948265   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:45.948272   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:45.948334   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:45.986864   64655 cri.go:89] found id: ""
	I0717 01:23:45.986888   64655 logs.go:276] 0 containers: []
	W0717 01:23:45.986895   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:45.986903   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:45.986915   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:46.042678   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:46.042707   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:46.094279   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:46.094319   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:46.108238   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:46.108264   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:46.178942   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:46.178963   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:46.178976   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:48.758301   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:48.771628   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:48.771692   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:48.806691   64655 cri.go:89] found id: ""
	I0717 01:23:48.806721   64655 logs.go:276] 0 containers: []
	W0717 01:23:48.806732   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:48.806740   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:48.806799   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:48.838545   64655 cri.go:89] found id: ""
	I0717 01:23:48.838573   64655 logs.go:276] 0 containers: []
	W0717 01:23:48.838582   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:48.838588   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:48.838638   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:48.869380   64655 cri.go:89] found id: ""
	I0717 01:23:48.869410   64655 logs.go:276] 0 containers: []
	W0717 01:23:48.869418   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:48.869423   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:48.869475   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:48.909363   64655 cri.go:89] found id: ""
	I0717 01:23:48.909389   64655 logs.go:276] 0 containers: []
	W0717 01:23:48.909398   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:48.909406   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:48.909463   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:48.943322   64655 cri.go:89] found id: ""
	I0717 01:23:48.943348   64655 logs.go:276] 0 containers: []
	W0717 01:23:48.943356   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:48.943362   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:48.943417   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:48.977742   64655 cri.go:89] found id: ""
	I0717 01:23:48.977768   64655 logs.go:276] 0 containers: []
	W0717 01:23:48.977782   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:48.977792   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:48.977862   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:49.011702   64655 cri.go:89] found id: ""
	I0717 01:23:49.011729   64655 logs.go:276] 0 containers: []
	W0717 01:23:49.011741   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:49.011747   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:49.011797   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:49.046390   64655 cri.go:89] found id: ""
	I0717 01:23:49.046416   64655 logs.go:276] 0 containers: []
	W0717 01:23:49.046424   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:49.046432   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:49.046447   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:49.083003   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:49.083034   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:49.132411   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:49.132445   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:49.146196   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:49.146227   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:49.207788   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:49.207811   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:49.207824   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:51.787003   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:51.800263   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:51.800336   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:51.838195   64655 cri.go:89] found id: ""
	I0717 01:23:51.838221   64655 logs.go:276] 0 containers: []
	W0717 01:23:51.838228   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:51.838234   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:51.838280   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:51.873589   64655 cri.go:89] found id: ""
	I0717 01:23:51.873616   64655 logs.go:276] 0 containers: []
	W0717 01:23:51.873624   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:51.873629   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:51.873683   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:51.906253   64655 cri.go:89] found id: ""
	I0717 01:23:51.906276   64655 logs.go:276] 0 containers: []
	W0717 01:23:51.906284   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:51.906290   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:51.906340   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:51.941339   64655 cri.go:89] found id: ""
	I0717 01:23:51.941366   64655 logs.go:276] 0 containers: []
	W0717 01:23:51.941374   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:51.941380   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:51.941429   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:51.979003   64655 cri.go:89] found id: ""
	I0717 01:23:51.979028   64655 logs.go:276] 0 containers: []
	W0717 01:23:51.979036   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:51.979046   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:51.979093   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:52.012663   64655 cri.go:89] found id: ""
	I0717 01:23:52.012689   64655 logs.go:276] 0 containers: []
	W0717 01:23:52.012696   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:52.012706   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:52.012758   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:52.046918   64655 cri.go:89] found id: ""
	I0717 01:23:52.046942   64655 logs.go:276] 0 containers: []
	W0717 01:23:52.046949   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:52.046958   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:52.047007   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:52.080326   64655 cri.go:89] found id: ""
	I0717 01:23:52.080349   64655 logs.go:276] 0 containers: []
	W0717 01:23:52.080357   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:52.080366   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:52.080377   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:52.131208   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:52.131250   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:52.144808   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:52.144851   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:52.213543   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:52.213565   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:52.213581   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:52.290548   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:52.290584   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:54.829587   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:54.842294   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:54.842351   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:54.874729   64655 cri.go:89] found id: ""
	I0717 01:23:54.874755   64655 logs.go:276] 0 containers: []
	W0717 01:23:54.874763   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:54.874773   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:54.874821   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:54.909306   64655 cri.go:89] found id: ""
	I0717 01:23:54.909330   64655 logs.go:276] 0 containers: []
	W0717 01:23:54.909338   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:54.909343   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:54.909394   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:54.942122   64655 cri.go:89] found id: ""
	I0717 01:23:54.942151   64655 logs.go:276] 0 containers: []
	W0717 01:23:54.942160   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:54.942167   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:54.942218   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:54.975551   64655 cri.go:89] found id: ""
	I0717 01:23:54.975582   64655 logs.go:276] 0 containers: []
	W0717 01:23:54.975594   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:54.975602   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:54.975658   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:55.008324   64655 cri.go:89] found id: ""
	I0717 01:23:55.008368   64655 logs.go:276] 0 containers: []
	W0717 01:23:55.008381   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:55.008389   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:55.008457   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:55.040885   64655 cri.go:89] found id: ""
	I0717 01:23:55.040914   64655 logs.go:276] 0 containers: []
	W0717 01:23:55.040921   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:55.040928   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:55.040980   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:55.074804   64655 cri.go:89] found id: ""
	I0717 01:23:55.074833   64655 logs.go:276] 0 containers: []
	W0717 01:23:55.074842   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:55.074848   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:55.074907   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:55.108058   64655 cri.go:89] found id: ""
	I0717 01:23:55.108086   64655 logs.go:276] 0 containers: []
	W0717 01:23:55.108095   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:55.108104   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:55.108117   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:23:55.121693   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:55.121722   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:55.183484   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:55.183504   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:55.183517   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:55.255754   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:55.255793   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:55.293826   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:55.293854   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:57.844675   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:23:57.857492   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:23:57.857566   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:23:57.903832   64655 cri.go:89] found id: ""
	I0717 01:23:57.903863   64655 logs.go:276] 0 containers: []
	W0717 01:23:57.903881   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:23:57.903890   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:23:57.903948   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:23:57.943125   64655 cri.go:89] found id: ""
	I0717 01:23:57.943157   64655 logs.go:276] 0 containers: []
	W0717 01:23:57.943167   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:23:57.943175   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:23:57.943226   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:23:57.979568   64655 cri.go:89] found id: ""
	I0717 01:23:57.979594   64655 logs.go:276] 0 containers: []
	W0717 01:23:57.979601   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:23:57.979607   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:23:57.979665   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:23:58.015996   64655 cri.go:89] found id: ""
	I0717 01:23:58.016023   64655 logs.go:276] 0 containers: []
	W0717 01:23:58.016031   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:23:58.016037   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:23:58.016089   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:23:58.048903   64655 cri.go:89] found id: ""
	I0717 01:23:58.048927   64655 logs.go:276] 0 containers: []
	W0717 01:23:58.048935   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:23:58.048948   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:23:58.048993   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:23:58.080907   64655 cri.go:89] found id: ""
	I0717 01:23:58.080935   64655 logs.go:276] 0 containers: []
	W0717 01:23:58.080945   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:23:58.080951   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:23:58.080999   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:23:58.113470   64655 cri.go:89] found id: ""
	I0717 01:23:58.113501   64655 logs.go:276] 0 containers: []
	W0717 01:23:58.113512   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:23:58.113518   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:23:58.113579   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:23:58.148064   64655 cri.go:89] found id: ""
	I0717 01:23:58.148095   64655 logs.go:276] 0 containers: []
	W0717 01:23:58.148111   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:23:58.148123   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:23:58.148141   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:23:58.245047   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:23:58.245072   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:23:58.245098   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:23:58.326394   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:23:58.326434   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:23:58.367710   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:23:58.367742   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:23:58.418319   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:23:58.418351   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:00.933161   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:00.946209   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:00.946265   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:00.979255   64655 cri.go:89] found id: ""
	I0717 01:24:00.979278   64655 logs.go:276] 0 containers: []
	W0717 01:24:00.979286   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:00.979291   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:00.979342   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:01.011754   64655 cri.go:89] found id: ""
	I0717 01:24:01.011779   64655 logs.go:276] 0 containers: []
	W0717 01:24:01.011786   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:01.011792   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:01.011842   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:01.045643   64655 cri.go:89] found id: ""
	I0717 01:24:01.045669   64655 logs.go:276] 0 containers: []
	W0717 01:24:01.045680   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:01.045687   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:01.045749   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:01.078581   64655 cri.go:89] found id: ""
	I0717 01:24:01.078620   64655 logs.go:276] 0 containers: []
	W0717 01:24:01.078639   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:01.078647   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:01.078704   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:01.114268   64655 cri.go:89] found id: ""
	I0717 01:24:01.114299   64655 logs.go:276] 0 containers: []
	W0717 01:24:01.114307   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:01.114313   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:01.114378   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:01.152686   64655 cri.go:89] found id: ""
	I0717 01:24:01.152711   64655 logs.go:276] 0 containers: []
	W0717 01:24:01.152720   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:01.152728   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:01.152789   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:01.188023   64655 cri.go:89] found id: ""
	I0717 01:24:01.188056   64655 logs.go:276] 0 containers: []
	W0717 01:24:01.188065   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:01.188072   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:01.188125   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:01.221155   64655 cri.go:89] found id: ""
	I0717 01:24:01.221183   64655 logs.go:276] 0 containers: []
	W0717 01:24:01.221192   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:01.221201   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:01.221212   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:01.262028   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:01.262063   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:01.311473   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:01.311505   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:01.326286   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:01.326317   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:01.396226   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:01.396249   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:01.396261   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:03.980104   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:04.010050   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:04.010123   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:04.043887   64655 cri.go:89] found id: ""
	I0717 01:24:04.043921   64655 logs.go:276] 0 containers: []
	W0717 01:24:04.043932   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:04.043941   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:04.043999   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:04.080629   64655 cri.go:89] found id: ""
	I0717 01:24:04.080653   64655 logs.go:276] 0 containers: []
	W0717 01:24:04.080660   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:04.080666   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:04.080711   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:04.113759   64655 cri.go:89] found id: ""
	I0717 01:24:04.113788   64655 logs.go:276] 0 containers: []
	W0717 01:24:04.113799   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:04.113808   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:04.113868   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:04.148419   64655 cri.go:89] found id: ""
	I0717 01:24:04.148448   64655 logs.go:276] 0 containers: []
	W0717 01:24:04.148459   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:04.148467   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:04.148528   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:04.182302   64655 cri.go:89] found id: ""
	I0717 01:24:04.182323   64655 logs.go:276] 0 containers: []
	W0717 01:24:04.182330   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:04.182337   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:04.182384   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:04.216310   64655 cri.go:89] found id: ""
	I0717 01:24:04.216335   64655 logs.go:276] 0 containers: []
	W0717 01:24:04.216345   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:04.216352   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:04.216416   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:04.247602   64655 cri.go:89] found id: ""
	I0717 01:24:04.247628   64655 logs.go:276] 0 containers: []
	W0717 01:24:04.247636   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:04.247641   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:04.247700   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:04.281270   64655 cri.go:89] found id: ""
	I0717 01:24:04.281299   64655 logs.go:276] 0 containers: []
	W0717 01:24:04.281307   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:04.281316   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:04.281328   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:04.318010   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:04.318039   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:04.369477   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:04.369515   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:04.382938   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:04.382965   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:04.449412   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:04.449442   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:04.449458   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:07.032476   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:07.047717   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:07.047789   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:07.084721   64655 cri.go:89] found id: ""
	I0717 01:24:07.084745   64655 logs.go:276] 0 containers: []
	W0717 01:24:07.084755   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:07.084762   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:07.084824   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:07.122402   64655 cri.go:89] found id: ""
	I0717 01:24:07.122429   64655 logs.go:276] 0 containers: []
	W0717 01:24:07.122437   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:07.122442   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:07.122489   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:07.154859   64655 cri.go:89] found id: ""
	I0717 01:24:07.154888   64655 logs.go:276] 0 containers: []
	W0717 01:24:07.154898   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:07.154905   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:07.154970   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:07.191056   64655 cri.go:89] found id: ""
	I0717 01:24:07.191082   64655 logs.go:276] 0 containers: []
	W0717 01:24:07.191089   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:07.191095   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:07.191148   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:07.224911   64655 cri.go:89] found id: ""
	I0717 01:24:07.224939   64655 logs.go:276] 0 containers: []
	W0717 01:24:07.224950   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:07.224964   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:07.225033   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:07.256291   64655 cri.go:89] found id: ""
	I0717 01:24:07.256318   64655 logs.go:276] 0 containers: []
	W0717 01:24:07.256329   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:07.256336   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:07.256406   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:07.289673   64655 cri.go:89] found id: ""
	I0717 01:24:07.289710   64655 logs.go:276] 0 containers: []
	W0717 01:24:07.289723   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:07.289732   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:07.289790   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:07.325981   64655 cri.go:89] found id: ""
	I0717 01:24:07.326012   64655 logs.go:276] 0 containers: []
	W0717 01:24:07.326023   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:07.326035   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:07.326049   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:07.376240   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:07.376274   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:07.389850   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:07.389872   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:07.464049   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:07.464077   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:07.464096   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:07.540146   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:07.540182   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:10.079767   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:10.093021   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:10.093098   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:10.128033   64655 cri.go:89] found id: ""
	I0717 01:24:10.128062   64655 logs.go:276] 0 containers: []
	W0717 01:24:10.128073   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:10.128082   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:10.128146   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:10.164864   64655 cri.go:89] found id: ""
	I0717 01:24:10.164890   64655 logs.go:276] 0 containers: []
	W0717 01:24:10.164900   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:10.164909   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:10.164961   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:10.204486   64655 cri.go:89] found id: ""
	I0717 01:24:10.204510   64655 logs.go:276] 0 containers: []
	W0717 01:24:10.204521   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:10.204528   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:10.204608   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:10.242509   64655 cri.go:89] found id: ""
	I0717 01:24:10.242543   64655 logs.go:276] 0 containers: []
	W0717 01:24:10.242555   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:10.242563   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:10.242611   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:10.278470   64655 cri.go:89] found id: ""
	I0717 01:24:10.278495   64655 logs.go:276] 0 containers: []
	W0717 01:24:10.278505   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:10.278513   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:10.278572   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:10.314783   64655 cri.go:89] found id: ""
	I0717 01:24:10.314809   64655 logs.go:276] 0 containers: []
	W0717 01:24:10.314816   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:10.314823   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:10.314882   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:10.348697   64655 cri.go:89] found id: ""
	I0717 01:24:10.348726   64655 logs.go:276] 0 containers: []
	W0717 01:24:10.348736   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:10.348744   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:10.348804   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:10.386250   64655 cri.go:89] found id: ""
	I0717 01:24:10.386278   64655 logs.go:276] 0 containers: []
	W0717 01:24:10.386286   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:10.386295   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:10.386304   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:10.440626   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:10.440662   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:10.454508   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:10.454538   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:10.523097   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:10.523122   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:10.523137   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:10.607771   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:10.607798   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:13.145216   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:13.161627   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:13.161683   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:13.198629   64655 cri.go:89] found id: ""
	I0717 01:24:13.198661   64655 logs.go:276] 0 containers: []
	W0717 01:24:13.198672   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:13.198682   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:13.198748   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:13.240233   64655 cri.go:89] found id: ""
	I0717 01:24:13.240262   64655 logs.go:276] 0 containers: []
	W0717 01:24:13.240272   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:13.240279   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:13.240343   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:13.274362   64655 cri.go:89] found id: ""
	I0717 01:24:13.274395   64655 logs.go:276] 0 containers: []
	W0717 01:24:13.274405   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:13.274413   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:13.274471   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:13.307449   64655 cri.go:89] found id: ""
	I0717 01:24:13.307476   64655 logs.go:276] 0 containers: []
	W0717 01:24:13.307484   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:13.307489   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:13.307542   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:13.341633   64655 cri.go:89] found id: ""
	I0717 01:24:13.341657   64655 logs.go:276] 0 containers: []
	W0717 01:24:13.341665   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:13.341670   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:13.341719   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:13.374712   64655 cri.go:89] found id: ""
	I0717 01:24:13.374743   64655 logs.go:276] 0 containers: []
	W0717 01:24:13.374753   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:13.374761   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:13.374818   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:13.409766   64655 cri.go:89] found id: ""
	I0717 01:24:13.409798   64655 logs.go:276] 0 containers: []
	W0717 01:24:13.409807   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:13.409813   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:13.409879   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:13.444407   64655 cri.go:89] found id: ""
	I0717 01:24:13.444438   64655 logs.go:276] 0 containers: []
	W0717 01:24:13.444446   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:13.444454   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:13.444469   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:13.524450   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:13.524486   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:13.560747   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:13.560789   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:13.617301   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:13.617331   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:13.630788   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:13.630817   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:13.697926   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:16.198431   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:16.212230   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:16.212293   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:16.247428   64655 cri.go:89] found id: ""
	I0717 01:24:16.247465   64655 logs.go:276] 0 containers: []
	W0717 01:24:16.247477   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:16.247486   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:16.247554   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:16.282437   64655 cri.go:89] found id: ""
	I0717 01:24:16.282468   64655 logs.go:276] 0 containers: []
	W0717 01:24:16.282477   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:16.282483   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:16.282534   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:16.316121   64655 cri.go:89] found id: ""
	I0717 01:24:16.316146   64655 logs.go:276] 0 containers: []
	W0717 01:24:16.316154   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:16.316161   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:16.316208   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:16.349266   64655 cri.go:89] found id: ""
	I0717 01:24:16.349294   64655 logs.go:276] 0 containers: []
	W0717 01:24:16.349302   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:16.349308   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:16.349354   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:16.383607   64655 cri.go:89] found id: ""
	I0717 01:24:16.383629   64655 logs.go:276] 0 containers: []
	W0717 01:24:16.383636   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:16.383641   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:16.383689   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:16.416813   64655 cri.go:89] found id: ""
	I0717 01:24:16.416838   64655 logs.go:276] 0 containers: []
	W0717 01:24:16.416846   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:16.416852   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:16.416904   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:16.450210   64655 cri.go:89] found id: ""
	I0717 01:24:16.450238   64655 logs.go:276] 0 containers: []
	W0717 01:24:16.450245   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:16.450251   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:16.450304   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:16.484680   64655 cri.go:89] found id: ""
	I0717 01:24:16.484706   64655 logs.go:276] 0 containers: []
	W0717 01:24:16.484715   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:16.484726   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:16.484737   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:16.542761   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:16.542796   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:16.557147   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:16.557178   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:16.623752   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:16.623776   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:16.623792   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:16.698210   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:16.698245   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:19.238833   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:19.252216   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:19.252282   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:19.286724   64655 cri.go:89] found id: ""
	I0717 01:24:19.286761   64655 logs.go:276] 0 containers: []
	W0717 01:24:19.286774   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:19.286783   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:19.286853   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:19.320689   64655 cri.go:89] found id: ""
	I0717 01:24:19.320717   64655 logs.go:276] 0 containers: []
	W0717 01:24:19.320732   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:19.320739   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:19.320806   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:19.352989   64655 cri.go:89] found id: ""
	I0717 01:24:19.353016   64655 logs.go:276] 0 containers: []
	W0717 01:24:19.353027   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:19.353036   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:19.353101   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:19.386742   64655 cri.go:89] found id: ""
	I0717 01:24:19.386770   64655 logs.go:276] 0 containers: []
	W0717 01:24:19.386780   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:19.386789   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:19.386859   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:19.420785   64655 cri.go:89] found id: ""
	I0717 01:24:19.420821   64655 logs.go:276] 0 containers: []
	W0717 01:24:19.420833   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:19.420842   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:19.420911   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:19.456406   64655 cri.go:89] found id: ""
	I0717 01:24:19.456441   64655 logs.go:276] 0 containers: []
	W0717 01:24:19.456452   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:19.456460   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:19.456516   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:19.490620   64655 cri.go:89] found id: ""
	I0717 01:24:19.490651   64655 logs.go:276] 0 containers: []
	W0717 01:24:19.490660   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:19.490666   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:19.490721   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:19.524674   64655 cri.go:89] found id: ""
	I0717 01:24:19.524701   64655 logs.go:276] 0 containers: []
	W0717 01:24:19.524710   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:19.524718   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:19.524731   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:19.537881   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:19.537910   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:19.605722   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:19.605744   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:19.605756   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:19.691560   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:19.691606   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:19.727445   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:19.727485   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:22.279486   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:22.294628   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:22.294706   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:22.327969   64655 cri.go:89] found id: ""
	I0717 01:24:22.327999   64655 logs.go:276] 0 containers: []
	W0717 01:24:22.328006   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:22.328012   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:22.328061   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:22.361286   64655 cri.go:89] found id: ""
	I0717 01:24:22.361319   64655 logs.go:276] 0 containers: []
	W0717 01:24:22.361329   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:22.361337   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:22.361395   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:22.394659   64655 cri.go:89] found id: ""
	I0717 01:24:22.394685   64655 logs.go:276] 0 containers: []
	W0717 01:24:22.394697   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:22.394704   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:22.394763   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:22.425617   64655 cri.go:89] found id: ""
	I0717 01:24:22.425646   64655 logs.go:276] 0 containers: []
	W0717 01:24:22.425654   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:22.425660   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:22.425709   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:22.460965   64655 cri.go:89] found id: ""
	I0717 01:24:22.460990   64655 logs.go:276] 0 containers: []
	W0717 01:24:22.460998   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:22.461003   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:22.461052   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:22.495330   64655 cri.go:89] found id: ""
	I0717 01:24:22.495360   64655 logs.go:276] 0 containers: []
	W0717 01:24:22.495373   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:22.495382   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:22.495446   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:22.533879   64655 cri.go:89] found id: ""
	I0717 01:24:22.533908   64655 logs.go:276] 0 containers: []
	W0717 01:24:22.533915   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:22.533920   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:22.533982   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:22.569460   64655 cri.go:89] found id: ""
	I0717 01:24:22.569493   64655 logs.go:276] 0 containers: []
	W0717 01:24:22.569503   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:22.569515   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:22.569529   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:22.626414   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:22.626452   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:22.639540   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:22.639569   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:22.709608   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:22.709632   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:22.709646   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:22.801206   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:22.801245   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:25.340739   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:25.353868   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:25.353935   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:25.387710   64655 cri.go:89] found id: ""
	I0717 01:24:25.387746   64655 logs.go:276] 0 containers: []
	W0717 01:24:25.387757   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:25.387765   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:25.387826   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:25.426036   64655 cri.go:89] found id: ""
	I0717 01:24:25.426066   64655 logs.go:276] 0 containers: []
	W0717 01:24:25.426073   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:25.426078   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:25.426126   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:25.463733   64655 cri.go:89] found id: ""
	I0717 01:24:25.463764   64655 logs.go:276] 0 containers: []
	W0717 01:24:25.463775   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:25.463782   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:25.463845   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:25.497185   64655 cri.go:89] found id: ""
	I0717 01:24:25.497215   64655 logs.go:276] 0 containers: []
	W0717 01:24:25.497225   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:25.497233   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:25.497292   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:25.530824   64655 cri.go:89] found id: ""
	I0717 01:24:25.530853   64655 logs.go:276] 0 containers: []
	W0717 01:24:25.530870   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:25.530876   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:25.530927   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:25.570732   64655 cri.go:89] found id: ""
	I0717 01:24:25.570765   64655 logs.go:276] 0 containers: []
	W0717 01:24:25.570775   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:25.570784   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:25.570846   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:25.602447   64655 cri.go:89] found id: ""
	I0717 01:24:25.602469   64655 logs.go:276] 0 containers: []
	W0717 01:24:25.602477   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:25.602482   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:25.602528   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:25.641421   64655 cri.go:89] found id: ""
	I0717 01:24:25.641456   64655 logs.go:276] 0 containers: []
	W0717 01:24:25.641467   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:25.641478   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:25.641493   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:25.728503   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:25.728548   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:25.766400   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:25.766431   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:25.817091   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:25.817134   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:25.830700   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:25.830728   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:25.906902   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:28.407315   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:28.420875   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:28.420931   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:28.457275   64655 cri.go:89] found id: ""
	I0717 01:24:28.457300   64655 logs.go:276] 0 containers: []
	W0717 01:24:28.457307   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:28.457316   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:28.457372   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:28.494551   64655 cri.go:89] found id: ""
	I0717 01:24:28.494582   64655 logs.go:276] 0 containers: []
	W0717 01:24:28.494594   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:28.494602   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:28.494666   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:28.526977   64655 cri.go:89] found id: ""
	I0717 01:24:28.527011   64655 logs.go:276] 0 containers: []
	W0717 01:24:28.527021   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:28.527027   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:28.527094   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:28.560063   64655 cri.go:89] found id: ""
	I0717 01:24:28.560087   64655 logs.go:276] 0 containers: []
	W0717 01:24:28.560094   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:28.560100   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:28.560147   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:28.593027   64655 cri.go:89] found id: ""
	I0717 01:24:28.593054   64655 logs.go:276] 0 containers: []
	W0717 01:24:28.593061   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:28.593067   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:28.593122   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:28.625167   64655 cri.go:89] found id: ""
	I0717 01:24:28.625199   64655 logs.go:276] 0 containers: []
	W0717 01:24:28.625206   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:28.625213   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:28.625260   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:28.661661   64655 cri.go:89] found id: ""
	I0717 01:24:28.661684   64655 logs.go:276] 0 containers: []
	W0717 01:24:28.661691   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:28.661697   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:28.661740   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:28.694903   64655 cri.go:89] found id: ""
	I0717 01:24:28.694936   64655 logs.go:276] 0 containers: []
	W0717 01:24:28.694945   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:28.694953   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:28.694966   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:28.734697   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:28.734728   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:28.786008   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:28.786041   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:28.800060   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:28.800091   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:28.866372   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:28.866397   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:28.866413   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:31.453240   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:31.466274   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:31.466355   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:31.506492   64655 cri.go:89] found id: ""
	I0717 01:24:31.506516   64655 logs.go:276] 0 containers: []
	W0717 01:24:31.506524   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:31.506530   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:31.506580   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:31.545134   64655 cri.go:89] found id: ""
	I0717 01:24:31.545169   64655 logs.go:276] 0 containers: []
	W0717 01:24:31.545179   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:31.545186   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:31.545243   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:31.582074   64655 cri.go:89] found id: ""
	I0717 01:24:31.582102   64655 logs.go:276] 0 containers: []
	W0717 01:24:31.582114   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:31.582121   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:31.582189   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:31.615258   64655 cri.go:89] found id: ""
	I0717 01:24:31.615285   64655 logs.go:276] 0 containers: []
	W0717 01:24:31.615296   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:31.615303   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:31.615360   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:31.650101   64655 cri.go:89] found id: ""
	I0717 01:24:31.650130   64655 logs.go:276] 0 containers: []
	W0717 01:24:31.650140   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:31.650148   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:31.650206   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:31.683638   64655 cri.go:89] found id: ""
	I0717 01:24:31.683670   64655 logs.go:276] 0 containers: []
	W0717 01:24:31.683681   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:31.683689   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:31.683749   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:31.717438   64655 cri.go:89] found id: ""
	I0717 01:24:31.717470   64655 logs.go:276] 0 containers: []
	W0717 01:24:31.717480   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:31.717488   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:31.717554   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:31.752287   64655 cri.go:89] found id: ""
	I0717 01:24:31.752313   64655 logs.go:276] 0 containers: []
	W0717 01:24:31.752323   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:31.752333   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:31.752347   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:31.833807   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:31.833852   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:31.875094   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:31.875118   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:31.925384   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:31.925414   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:31.938575   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:31.938601   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:32.020114   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:34.521226   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:34.534444   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:34.534506   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:34.569203   64655 cri.go:89] found id: ""
	I0717 01:24:34.569232   64655 logs.go:276] 0 containers: []
	W0717 01:24:34.569243   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:34.569251   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:34.569315   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:34.602002   64655 cri.go:89] found id: ""
	I0717 01:24:34.602033   64655 logs.go:276] 0 containers: []
	W0717 01:24:34.602042   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:34.602048   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:34.602099   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:34.635657   64655 cri.go:89] found id: ""
	I0717 01:24:34.635685   64655 logs.go:276] 0 containers: []
	W0717 01:24:34.635696   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:34.635702   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:34.635748   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:34.670432   64655 cri.go:89] found id: ""
	I0717 01:24:34.670460   64655 logs.go:276] 0 containers: []
	W0717 01:24:34.670470   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:34.670482   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:34.670528   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:34.707505   64655 cri.go:89] found id: ""
	I0717 01:24:34.707529   64655 logs.go:276] 0 containers: []
	W0717 01:24:34.707536   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:34.707542   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:34.707594   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:34.740932   64655 cri.go:89] found id: ""
	I0717 01:24:34.740956   64655 logs.go:276] 0 containers: []
	W0717 01:24:34.740964   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:34.740970   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:34.741018   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:34.775555   64655 cri.go:89] found id: ""
	I0717 01:24:34.775588   64655 logs.go:276] 0 containers: []
	W0717 01:24:34.775598   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:34.775605   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:34.775667   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:34.810087   64655 cri.go:89] found id: ""
	I0717 01:24:34.810124   64655 logs.go:276] 0 containers: []
	W0717 01:24:34.810135   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:34.810147   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:34.810163   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:34.860624   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:34.860652   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:34.919282   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:34.919321   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:34.936005   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:34.936035   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:35.004453   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:35.004480   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:35.004495   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:37.591715   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:37.605012   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:37.605069   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:37.644624   64655 cri.go:89] found id: ""
	I0717 01:24:37.644665   64655 logs.go:276] 0 containers: []
	W0717 01:24:37.644677   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:37.644684   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:37.644738   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:37.680001   64655 cri.go:89] found id: ""
	I0717 01:24:37.680031   64655 logs.go:276] 0 containers: []
	W0717 01:24:37.680042   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:37.680047   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:37.680108   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:37.716138   64655 cri.go:89] found id: ""
	I0717 01:24:37.716177   64655 logs.go:276] 0 containers: []
	W0717 01:24:37.716188   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:37.716196   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:37.716266   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:37.759314   64655 cri.go:89] found id: ""
	I0717 01:24:37.759357   64655 logs.go:276] 0 containers: []
	W0717 01:24:37.759370   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:37.759380   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:37.759449   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:37.798096   64655 cri.go:89] found id: ""
	I0717 01:24:37.798129   64655 logs.go:276] 0 containers: []
	W0717 01:24:37.798140   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:37.798147   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:37.798217   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:37.831107   64655 cri.go:89] found id: ""
	I0717 01:24:37.831133   64655 logs.go:276] 0 containers: []
	W0717 01:24:37.831141   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:37.831147   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:37.831206   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:37.869734   64655 cri.go:89] found id: ""
	I0717 01:24:37.869768   64655 logs.go:276] 0 containers: []
	W0717 01:24:37.869776   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:37.869782   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:37.869831   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:37.904456   64655 cri.go:89] found id: ""
	I0717 01:24:37.904480   64655 logs.go:276] 0 containers: []
	W0717 01:24:37.904487   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:37.904495   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:37.904506   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:37.942364   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:37.942391   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:37.992861   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:37.992892   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:38.006878   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:38.006902   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:38.076752   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:38.076777   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:38.076792   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
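	[Editorial note, not part of the captured log] Each block above and below records one pass of the same probe, repeated roughly every 2.5 seconds: pgrep for a kube-apiserver process, crictl listings for each control-plane component (all empty), then a log sweep over kubelet, dmesg, "describe nodes", CRI-O, and container status. As an illustration only, here is a minimal Go sketch of such a probe loop. The command lines are copied from the log entries; the timeout, interval, and structure are assumptions for the sketch and are not minikube's actual logs.go/cri.go implementation.

	// probe_sketch.go - illustrative only, assumes local sudo, pgrep, and crictl.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the two checks recorded in the log:
	// a pgrep for the kube-apiserver process and a crictl container listing.
	func apiserverRunning() bool {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
			return false // pgrep exits non-zero when no process matches
		}
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		return err == nil && len(out) > 0
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed timeout for the sketch
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			time.Sleep(2500 * time.Millisecond) // the log shows ~2.5s between passes
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}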
	I0717 01:24:40.653943   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:40.667045   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:40.667105   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:40.699850   64655 cri.go:89] found id: ""
	I0717 01:24:40.699876   64655 logs.go:276] 0 containers: []
	W0717 01:24:40.699883   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:40.699889   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:40.699933   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:40.732528   64655 cri.go:89] found id: ""
	I0717 01:24:40.732569   64655 logs.go:276] 0 containers: []
	W0717 01:24:40.732580   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:40.732588   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:40.732646   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:40.768169   64655 cri.go:89] found id: ""
	I0717 01:24:40.768195   64655 logs.go:276] 0 containers: []
	W0717 01:24:40.768204   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:40.768211   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:40.768278   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:40.799996   64655 cri.go:89] found id: ""
	I0717 01:24:40.800025   64655 logs.go:276] 0 containers: []
	W0717 01:24:40.800036   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:40.800044   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:40.800112   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:40.834053   64655 cri.go:89] found id: ""
	I0717 01:24:40.834080   64655 logs.go:276] 0 containers: []
	W0717 01:24:40.834099   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:40.834106   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:40.834166   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:40.871290   64655 cri.go:89] found id: ""
	I0717 01:24:40.871315   64655 logs.go:276] 0 containers: []
	W0717 01:24:40.871322   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:40.871328   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:40.871377   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:40.903911   64655 cri.go:89] found id: ""
	I0717 01:24:40.903930   64655 logs.go:276] 0 containers: []
	W0717 01:24:40.903936   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:40.903941   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:40.903985   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:40.937040   64655 cri.go:89] found id: ""
	I0717 01:24:40.937075   64655 logs.go:276] 0 containers: []
	W0717 01:24:40.937084   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:40.937093   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:40.937104   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:40.987071   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:40.987103   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:41.002321   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:41.002346   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:41.070253   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:41.070274   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:41.070288   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:41.146768   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:41.146803   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:43.686214   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:43.698658   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:43.698726   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:43.732792   64655 cri.go:89] found id: ""
	I0717 01:24:43.732821   64655 logs.go:276] 0 containers: []
	W0717 01:24:43.732831   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:43.732838   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:43.732903   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:43.765815   64655 cri.go:89] found id: ""
	I0717 01:24:43.765849   64655 logs.go:276] 0 containers: []
	W0717 01:24:43.765858   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:43.765864   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:43.765917   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:43.799645   64655 cri.go:89] found id: ""
	I0717 01:24:43.799671   64655 logs.go:276] 0 containers: []
	W0717 01:24:43.799679   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:43.799687   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:43.799746   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:43.832920   64655 cri.go:89] found id: ""
	I0717 01:24:43.832946   64655 logs.go:276] 0 containers: []
	W0717 01:24:43.832954   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:43.832959   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:43.833007   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:43.864934   64655 cri.go:89] found id: ""
	I0717 01:24:43.864959   64655 logs.go:276] 0 containers: []
	W0717 01:24:43.864970   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:43.864978   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:43.865032   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:43.898766   64655 cri.go:89] found id: ""
	I0717 01:24:43.898788   64655 logs.go:276] 0 containers: []
	W0717 01:24:43.898796   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:43.898801   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:43.898846   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:43.931284   64655 cri.go:89] found id: ""
	I0717 01:24:43.931310   64655 logs.go:276] 0 containers: []
	W0717 01:24:43.931318   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:43.931324   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:43.931370   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:43.963646   64655 cri.go:89] found id: ""
	I0717 01:24:43.963670   64655 logs.go:276] 0 containers: []
	W0717 01:24:43.963677   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:43.963687   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:43.963701   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:44.002648   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:44.002679   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:44.052173   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:44.052206   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:44.066926   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:44.066952   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:44.131369   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:44.131388   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:44.131400   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:46.731672   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:46.744602   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:46.744675   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:46.779571   64655 cri.go:89] found id: ""
	I0717 01:24:46.779594   64655 logs.go:276] 0 containers: []
	W0717 01:24:46.779602   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:46.779607   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:46.779675   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:46.812357   64655 cri.go:89] found id: ""
	I0717 01:24:46.812384   64655 logs.go:276] 0 containers: []
	W0717 01:24:46.812395   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:46.812402   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:46.812467   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:46.846856   64655 cri.go:89] found id: ""
	I0717 01:24:46.846883   64655 logs.go:276] 0 containers: []
	W0717 01:24:46.846892   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:46.846900   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:46.846963   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:46.882803   64655 cri.go:89] found id: ""
	I0717 01:24:46.882829   64655 logs.go:276] 0 containers: []
	W0717 01:24:46.882840   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:46.882848   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:46.882912   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:46.916368   64655 cri.go:89] found id: ""
	I0717 01:24:46.916394   64655 logs.go:276] 0 containers: []
	W0717 01:24:46.916402   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:46.916407   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:46.916480   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:46.952022   64655 cri.go:89] found id: ""
	I0717 01:24:46.952050   64655 logs.go:276] 0 containers: []
	W0717 01:24:46.952058   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:46.952064   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:46.952122   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:46.989489   64655 cri.go:89] found id: ""
	I0717 01:24:46.989517   64655 logs.go:276] 0 containers: []
	W0717 01:24:46.989527   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:46.989539   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:46.989610   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:47.025681   64655 cri.go:89] found id: ""
	I0717 01:24:47.025709   64655 logs.go:276] 0 containers: []
	W0717 01:24:47.025719   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:47.025729   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:47.025745   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:47.107383   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:47.107418   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:47.146552   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:47.146582   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:47.200885   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:47.200919   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:47.215245   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:47.215270   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:47.282712   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
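	[Editorial note, not part of the captured log] Every "describe nodes" attempt in this stretch fails the same way: kubectl cannot reach localhost:8443, i.e. nothing is listening where the apiserver should be. A quick way to confirm that symptom from the control-plane node itself is a plain TCP dial; the sketch below is illustrative only and assumes it runs on that machine.

	// dial_sketch.go - illustrative only; reproduces the "connection refused" symptom.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Matches the repeated kubectl error in the log: nothing is listening on 8443.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}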
	I0717 01:24:49.783882   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:49.796932   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:49.796998   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:49.829824   64655 cri.go:89] found id: ""
	I0717 01:24:49.829856   64655 logs.go:276] 0 containers: []
	W0717 01:24:49.829866   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:49.829874   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:49.829925   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:49.863205   64655 cri.go:89] found id: ""
	I0717 01:24:49.863229   64655 logs.go:276] 0 containers: []
	W0717 01:24:49.863237   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:49.863242   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:49.863293   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:49.898819   64655 cri.go:89] found id: ""
	I0717 01:24:49.898855   64655 logs.go:276] 0 containers: []
	W0717 01:24:49.898868   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:49.898875   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:49.898986   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:49.934668   64655 cri.go:89] found id: ""
	I0717 01:24:49.934695   64655 logs.go:276] 0 containers: []
	W0717 01:24:49.934703   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:49.934708   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:49.934756   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:49.974519   64655 cri.go:89] found id: ""
	I0717 01:24:49.974543   64655 logs.go:276] 0 containers: []
	W0717 01:24:49.974550   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:49.974556   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:49.974603   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:50.027036   64655 cri.go:89] found id: ""
	I0717 01:24:50.027060   64655 logs.go:276] 0 containers: []
	W0717 01:24:50.027066   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:50.027072   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:50.027130   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:50.063444   64655 cri.go:89] found id: ""
	I0717 01:24:50.063472   64655 logs.go:276] 0 containers: []
	W0717 01:24:50.063480   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:50.063487   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:50.063542   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:50.102305   64655 cri.go:89] found id: ""
	I0717 01:24:50.102336   64655 logs.go:276] 0 containers: []
	W0717 01:24:50.102346   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:50.102354   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:50.102372   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:50.156995   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:50.157026   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:50.170746   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:50.170776   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:50.240879   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:50.240901   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:50.240915   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:50.323680   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:50.323719   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:52.861195   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:52.874776   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:52.874845   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:52.914036   64655 cri.go:89] found id: ""
	I0717 01:24:52.914065   64655 logs.go:276] 0 containers: []
	W0717 01:24:52.914077   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:52.914084   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:52.914145   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:52.951851   64655 cri.go:89] found id: ""
	I0717 01:24:52.951880   64655 logs.go:276] 0 containers: []
	W0717 01:24:52.951890   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:52.951897   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:52.951967   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:52.985414   64655 cri.go:89] found id: ""
	I0717 01:24:52.985443   64655 logs.go:276] 0 containers: []
	W0717 01:24:52.985452   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:52.985459   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:52.985519   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:53.020227   64655 cri.go:89] found id: ""
	I0717 01:24:53.020251   64655 logs.go:276] 0 containers: []
	W0717 01:24:53.020261   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:53.020268   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:53.020325   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:53.057859   64655 cri.go:89] found id: ""
	I0717 01:24:53.057897   64655 logs.go:276] 0 containers: []
	W0717 01:24:53.057910   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:53.057919   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:53.057983   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:53.096821   64655 cri.go:89] found id: ""
	I0717 01:24:53.096853   64655 logs.go:276] 0 containers: []
	W0717 01:24:53.096863   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:53.096871   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:53.096943   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:53.128080   64655 cri.go:89] found id: ""
	I0717 01:24:53.128111   64655 logs.go:276] 0 containers: []
	W0717 01:24:53.128119   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:53.128125   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:53.128181   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:53.163900   64655 cri.go:89] found id: ""
	I0717 01:24:53.163930   64655 logs.go:276] 0 containers: []
	W0717 01:24:53.163941   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:53.163952   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:53.163966   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:53.216445   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:53.216479   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:53.229932   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:53.229958   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:53.296233   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:53.296255   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:53.296271   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:53.375265   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:53.375296   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:55.914295   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:55.930478   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:55.930538   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:56.003063   64655 cri.go:89] found id: ""
	I0717 01:24:56.003090   64655 logs.go:276] 0 containers: []
	W0717 01:24:56.003101   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:56.003109   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:56.003170   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:56.037638   64655 cri.go:89] found id: ""
	I0717 01:24:56.037666   64655 logs.go:276] 0 containers: []
	W0717 01:24:56.037676   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:56.037682   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:56.037737   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:56.072989   64655 cri.go:89] found id: ""
	I0717 01:24:56.073011   64655 logs.go:276] 0 containers: []
	W0717 01:24:56.073017   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:56.073023   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:56.073081   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:56.111630   64655 cri.go:89] found id: ""
	I0717 01:24:56.111661   64655 logs.go:276] 0 containers: []
	W0717 01:24:56.111669   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:56.111675   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:56.111722   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:56.153645   64655 cri.go:89] found id: ""
	I0717 01:24:56.153670   64655 logs.go:276] 0 containers: []
	W0717 01:24:56.153677   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:56.153682   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:56.153732   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:56.189650   64655 cri.go:89] found id: ""
	I0717 01:24:56.189674   64655 logs.go:276] 0 containers: []
	W0717 01:24:56.189681   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:56.189687   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:56.189736   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:56.223298   64655 cri.go:89] found id: ""
	I0717 01:24:56.223332   64655 logs.go:276] 0 containers: []
	W0717 01:24:56.223343   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:56.223351   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:56.223406   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:56.255290   64655 cri.go:89] found id: ""
	I0717 01:24:56.255318   64655 logs.go:276] 0 containers: []
	W0717 01:24:56.255329   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:56.255339   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:56.255353   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:24:56.295833   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:56.295865   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:56.349067   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:56.349105   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:56.362375   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:56.362404   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:56.428133   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:56.428151   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:56.428163   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:59.009165   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:24:59.021925   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:24:59.021990   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:24:59.054643   64655 cri.go:89] found id: ""
	I0717 01:24:59.054666   64655 logs.go:276] 0 containers: []
	W0717 01:24:59.054675   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:24:59.054681   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:24:59.054728   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:24:59.087778   64655 cri.go:89] found id: ""
	I0717 01:24:59.087808   64655 logs.go:276] 0 containers: []
	W0717 01:24:59.087818   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:24:59.087825   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:24:59.087886   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:24:59.121928   64655 cri.go:89] found id: ""
	I0717 01:24:59.121955   64655 logs.go:276] 0 containers: []
	W0717 01:24:59.121963   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:24:59.121968   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:24:59.122023   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:24:59.152808   64655 cri.go:89] found id: ""
	I0717 01:24:59.152835   64655 logs.go:276] 0 containers: []
	W0717 01:24:59.152849   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:24:59.152857   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:24:59.152914   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:24:59.185030   64655 cri.go:89] found id: ""
	I0717 01:24:59.185058   64655 logs.go:276] 0 containers: []
	W0717 01:24:59.185069   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:24:59.185076   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:24:59.185137   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:24:59.217812   64655 cri.go:89] found id: ""
	I0717 01:24:59.217840   64655 logs.go:276] 0 containers: []
	W0717 01:24:59.217850   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:24:59.217858   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:24:59.217915   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:24:59.253146   64655 cri.go:89] found id: ""
	I0717 01:24:59.253174   64655 logs.go:276] 0 containers: []
	W0717 01:24:59.253185   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:24:59.253202   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:24:59.253267   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:24:59.286083   64655 cri.go:89] found id: ""
	I0717 01:24:59.286106   64655 logs.go:276] 0 containers: []
	W0717 01:24:59.286114   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:24:59.286122   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:24:59.286131   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:24:59.338742   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:24:59.338770   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:24:59.351773   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:24:59.351800   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:24:59.412883   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:24:59.412905   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:24:59.412917   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:24:59.490911   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:24:59.490949   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:25:02.034497   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:25:02.047232   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:25:02.047290   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:25:02.079983   64655 cri.go:89] found id: ""
	I0717 01:25:02.080011   64655 logs.go:276] 0 containers: []
	W0717 01:25:02.080019   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:25:02.080025   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:25:02.080073   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:25:02.113726   64655 cri.go:89] found id: ""
	I0717 01:25:02.113749   64655 logs.go:276] 0 containers: []
	W0717 01:25:02.113760   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:25:02.113766   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:25:02.113812   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:25:02.147718   64655 cri.go:89] found id: ""
	I0717 01:25:02.147742   64655 logs.go:276] 0 containers: []
	W0717 01:25:02.147750   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:25:02.147755   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:25:02.147798   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:25:02.181619   64655 cri.go:89] found id: ""
	I0717 01:25:02.181641   64655 logs.go:276] 0 containers: []
	W0717 01:25:02.181648   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:25:02.181656   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:25:02.181705   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:25:02.214291   64655 cri.go:89] found id: ""
	I0717 01:25:02.214315   64655 logs.go:276] 0 containers: []
	W0717 01:25:02.214322   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:25:02.214328   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:25:02.214382   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:25:02.248227   64655 cri.go:89] found id: ""
	I0717 01:25:02.248253   64655 logs.go:276] 0 containers: []
	W0717 01:25:02.248261   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:25:02.248269   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:25:02.248315   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:25:02.281506   64655 cri.go:89] found id: ""
	I0717 01:25:02.281532   64655 logs.go:276] 0 containers: []
	W0717 01:25:02.281541   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:25:02.281548   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:25:02.281628   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:25:02.317374   64655 cri.go:89] found id: ""
	I0717 01:25:02.317404   64655 logs.go:276] 0 containers: []
	W0717 01:25:02.317414   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:25:02.317425   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:25:02.317437   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:25:02.330299   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:25:02.330324   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:25:02.394992   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:25:02.395021   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:25:02.395037   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:25:02.482222   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:25:02.482264   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:25:02.523771   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:25:02.523803   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:25:05.077800   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:25:05.093180   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:25:05.093245   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:25:05.129309   64655 cri.go:89] found id: ""
	I0717 01:25:05.129337   64655 logs.go:276] 0 containers: []
	W0717 01:25:05.129346   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:25:05.129352   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:25:05.129413   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:25:05.165439   64655 cri.go:89] found id: ""
	I0717 01:25:05.165467   64655 logs.go:276] 0 containers: []
	W0717 01:25:05.165478   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:25:05.165485   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:25:05.165546   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:25:05.200131   64655 cri.go:89] found id: ""
	I0717 01:25:05.200160   64655 logs.go:276] 0 containers: []
	W0717 01:25:05.200167   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:25:05.200173   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:25:05.200229   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:25:05.236439   64655 cri.go:89] found id: ""
	I0717 01:25:05.236464   64655 logs.go:276] 0 containers: []
	W0717 01:25:05.236472   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:25:05.236479   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:25:05.236535   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:25:05.270007   64655 cri.go:89] found id: ""
	I0717 01:25:05.270036   64655 logs.go:276] 0 containers: []
	W0717 01:25:05.270047   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:25:05.270055   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:25:05.270128   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:25:05.305644   64655 cri.go:89] found id: ""
	I0717 01:25:05.305675   64655 logs.go:276] 0 containers: []
	W0717 01:25:05.305686   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:25:05.305694   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:25:05.305765   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:25:05.341649   64655 cri.go:89] found id: ""
	I0717 01:25:05.341679   64655 logs.go:276] 0 containers: []
	W0717 01:25:05.341690   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:25:05.341698   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:25:05.341758   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:25:05.377526   64655 cri.go:89] found id: ""
	I0717 01:25:05.377556   64655 logs.go:276] 0 containers: []
	W0717 01:25:05.377564   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:25:05.377573   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:25:05.377585   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:25:05.414728   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:25:05.414754   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:25:05.463707   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:25:05.463741   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:25:05.477340   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:25:05.477370   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:25:05.540365   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:25:05.540384   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:25:05.540396   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:25:08.128211   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:25:08.141273   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:25:08.141350   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:25:08.173828   64655 cri.go:89] found id: ""
	I0717 01:25:08.173854   64655 logs.go:276] 0 containers: []
	W0717 01:25:08.173864   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:25:08.173872   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:25:08.173935   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:25:08.208349   64655 cri.go:89] found id: ""
	I0717 01:25:08.208382   64655 logs.go:276] 0 containers: []
	W0717 01:25:08.208390   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:25:08.208396   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:25:08.208449   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:25:08.245799   64655 cri.go:89] found id: ""
	I0717 01:25:08.245827   64655 logs.go:276] 0 containers: []
	W0717 01:25:08.245836   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:25:08.245845   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:25:08.245896   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:25:08.281110   64655 cri.go:89] found id: ""
	I0717 01:25:08.281139   64655 logs.go:276] 0 containers: []
	W0717 01:25:08.281149   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:25:08.281155   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:25:08.281211   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:25:08.313793   64655 cri.go:89] found id: ""
	I0717 01:25:08.313822   64655 logs.go:276] 0 containers: []
	W0717 01:25:08.313830   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:25:08.313837   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:25:08.313889   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:25:08.352634   64655 cri.go:89] found id: ""
	I0717 01:25:08.352657   64655 logs.go:276] 0 containers: []
	W0717 01:25:08.352668   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:25:08.352675   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:25:08.352737   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:25:08.387059   64655 cri.go:89] found id: ""
	I0717 01:25:08.387091   64655 logs.go:276] 0 containers: []
	W0717 01:25:08.387107   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:25:08.387115   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:25:08.387177   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:25:08.426200   64655 cri.go:89] found id: ""
	I0717 01:25:08.426228   64655 logs.go:276] 0 containers: []
	W0717 01:25:08.426235   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:25:08.426243   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:25:08.426255   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:25:08.501440   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:25:08.501465   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:25:08.501479   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:25:08.578198   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:25:08.578247   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:25:08.624476   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:25:08.624526   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:25:08.674535   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:25:08.674567   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:25:11.190120   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:25:11.203962   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:25:11.204032   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:25:11.240473   64655 cri.go:89] found id: ""
	I0717 01:25:11.240500   64655 logs.go:276] 0 containers: []
	W0717 01:25:11.240510   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:25:11.240516   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:25:11.240588   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:25:11.275873   64655 cri.go:89] found id: ""
	I0717 01:25:11.275898   64655 logs.go:276] 0 containers: []
	W0717 01:25:11.275905   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:25:11.275911   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:25:11.275970   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:25:11.311701   64655 cri.go:89] found id: ""
	I0717 01:25:11.311728   64655 logs.go:276] 0 containers: []
	W0717 01:25:11.311736   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:25:11.311742   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:25:11.311788   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:25:11.345474   64655 cri.go:89] found id: ""
	I0717 01:25:11.345503   64655 logs.go:276] 0 containers: []
	W0717 01:25:11.345513   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:25:11.345521   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:25:11.345578   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:25:11.380131   64655 cri.go:89] found id: ""
	I0717 01:25:11.380169   64655 logs.go:276] 0 containers: []
	W0717 01:25:11.380180   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:25:11.380187   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:25:11.380241   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:25:11.414259   64655 cri.go:89] found id: ""
	I0717 01:25:11.414288   64655 logs.go:276] 0 containers: []
	W0717 01:25:11.414297   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:25:11.414303   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:25:11.414350   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:25:11.449655   64655 cri.go:89] found id: ""
	I0717 01:25:11.449676   64655 logs.go:276] 0 containers: []
	W0717 01:25:11.449684   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:25:11.449689   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:25:11.449735   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:25:11.485138   64655 cri.go:89] found id: ""
	I0717 01:25:11.485175   64655 logs.go:276] 0 containers: []
	W0717 01:25:11.485188   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:25:11.485200   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:25:11.485214   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:25:11.538268   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:25:11.538301   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:25:11.551904   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:25:11.551928   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:25:11.619677   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:25:11.619699   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:25:11.619714   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:25:11.693750   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:25:11.693784   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:25:14.232914   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:25:14.245956   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:25:14.246021   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:25:14.281486   64655 cri.go:89] found id: ""
	I0717 01:25:14.281509   64655 logs.go:276] 0 containers: []
	W0717 01:25:14.281516   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:25:14.281522   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:25:14.281579   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:25:14.315564   64655 cri.go:89] found id: ""
	I0717 01:25:14.315593   64655 logs.go:276] 0 containers: []
	W0717 01:25:14.315601   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:25:14.315607   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:25:14.315668   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:25:14.348889   64655 cri.go:89] found id: ""
	I0717 01:25:14.348919   64655 logs.go:276] 0 containers: []
	W0717 01:25:14.348931   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:25:14.348938   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:25:14.349008   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:25:14.383544   64655 cri.go:89] found id: ""
	I0717 01:25:14.383576   64655 logs.go:276] 0 containers: []
	W0717 01:25:14.383584   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:25:14.383590   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:25:14.383660   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:25:14.415717   64655 cri.go:89] found id: ""
	I0717 01:25:14.415742   64655 logs.go:276] 0 containers: []
	W0717 01:25:14.415750   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:25:14.415756   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:25:14.415804   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:25:14.447722   64655 cri.go:89] found id: ""
	I0717 01:25:14.447751   64655 logs.go:276] 0 containers: []
	W0717 01:25:14.447760   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:25:14.447766   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:25:14.447817   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:25:14.483701   64655 cri.go:89] found id: ""
	I0717 01:25:14.483728   64655 logs.go:276] 0 containers: []
	W0717 01:25:14.483735   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:25:14.483740   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:25:14.483794   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:25:14.516511   64655 cri.go:89] found id: ""
	I0717 01:25:14.516542   64655 logs.go:276] 0 containers: []
	W0717 01:25:14.516552   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:25:14.516573   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:25:14.516589   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:25:14.568771   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:25:14.568803   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:25:14.582716   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:25:14.582742   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:25:14.645658   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:25:14.645679   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:25:14.645694   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:25:14.719288   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:25:14.719321   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:25:17.255988   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:25:17.269168   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:25:17.269248   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:25:17.306480   64655 cri.go:89] found id: ""
	I0717 01:25:17.306509   64655 logs.go:276] 0 containers: []
	W0717 01:25:17.306519   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:25:17.306525   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:25:17.306574   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:25:17.341947   64655 cri.go:89] found id: ""
	I0717 01:25:17.341980   64655 logs.go:276] 0 containers: []
	W0717 01:25:17.341989   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:25:17.341996   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:25:17.342051   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:25:17.376097   64655 cri.go:89] found id: ""
	I0717 01:25:17.376121   64655 logs.go:276] 0 containers: []
	W0717 01:25:17.376128   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:25:17.376134   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:25:17.376188   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:25:17.409854   64655 cri.go:89] found id: ""
	I0717 01:25:17.409883   64655 logs.go:276] 0 containers: []
	W0717 01:25:17.409898   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:25:17.409904   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:25:17.409951   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:25:17.443830   64655 cri.go:89] found id: ""
	I0717 01:25:17.443857   64655 logs.go:276] 0 containers: []
	W0717 01:25:17.443865   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:25:17.443871   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:25:17.443920   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:25:17.477434   64655 cri.go:89] found id: ""
	I0717 01:25:17.477459   64655 logs.go:276] 0 containers: []
	W0717 01:25:17.477466   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:25:17.477473   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:25:17.477528   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:25:17.512088   64655 cri.go:89] found id: ""
	I0717 01:25:17.512121   64655 logs.go:276] 0 containers: []
	W0717 01:25:17.512132   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:25:17.512139   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:25:17.512193   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:25:17.545227   64655 cri.go:89] found id: ""
	I0717 01:25:17.545253   64655 logs.go:276] 0 containers: []
	W0717 01:25:17.545261   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:25:17.545269   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:25:17.545280   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:25:17.601705   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:25:17.601741   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:25:17.615590   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:25:17.615614   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:25:17.677139   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:25:17.677166   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:25:17.677181   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:25:17.759874   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:25:17.759917   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:25:20.303534   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:25:20.316279   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:25:20.316365   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:25:20.355183   64655 cri.go:89] found id: ""
	I0717 01:25:20.355219   64655 logs.go:276] 0 containers: []
	W0717 01:25:20.355232   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:25:20.355241   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:25:20.355303   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:25:20.392407   64655 cri.go:89] found id: ""
	I0717 01:25:20.392444   64655 logs.go:276] 0 containers: []
	W0717 01:25:20.392457   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:25:20.392465   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:25:20.392531   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:25:20.428047   64655 cri.go:89] found id: ""
	I0717 01:25:20.428076   64655 logs.go:276] 0 containers: []
	W0717 01:25:20.428087   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:25:20.428094   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:25:20.428156   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:25:20.462584   64655 cri.go:89] found id: ""
	I0717 01:25:20.462612   64655 logs.go:276] 0 containers: []
	W0717 01:25:20.462620   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:25:20.462627   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:25:20.462685   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:25:20.509258   64655 cri.go:89] found id: ""
	I0717 01:25:20.509281   64655 logs.go:276] 0 containers: []
	W0717 01:25:20.509289   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:25:20.509296   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:25:20.509346   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:25:20.546190   64655 cri.go:89] found id: ""
	I0717 01:25:20.546211   64655 logs.go:276] 0 containers: []
	W0717 01:25:20.546218   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:25:20.546225   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:25:20.546272   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:25:20.579074   64655 cri.go:89] found id: ""
	I0717 01:25:20.579106   64655 logs.go:276] 0 containers: []
	W0717 01:25:20.579116   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:25:20.579125   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:25:20.579200   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:25:20.613241   64655 cri.go:89] found id: ""
	I0717 01:25:20.613264   64655 logs.go:276] 0 containers: []
	W0717 01:25:20.613273   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:25:20.613282   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:25:20.613294   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:25:20.668566   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:25:20.668606   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:25:20.682688   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:25:20.682714   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:25:20.757232   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:25:20.757261   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:25:20.757275   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:25:20.833138   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:25:20.833181   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:25:23.371257   64655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:25:23.384489   64655 kubeadm.go:597] duration metric: took 4m2.246361221s to restartPrimaryControlPlane
	W0717 01:25:23.384590   64655 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 01:25:23.384625   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 01:25:24.041788   64655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:25:24.055694   64655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:25:24.065475   64655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:25:24.074976   64655 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:25:24.074993   64655 kubeadm.go:157] found existing configuration files:
	
	I0717 01:25:24.075034   64655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:25:24.083884   64655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:25:24.083943   64655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:25:24.093256   64655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:25:24.102024   64655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:25:24.102070   64655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:25:24.111313   64655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:25:24.119879   64655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:25:24.119915   64655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:25:24.128792   64655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:25:24.137070   64655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:25:24.137115   64655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:25:24.146042   64655 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:25:24.219166   64655 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 01:25:24.219227   64655 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:25:24.370045   64655 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:25:24.370172   64655 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:25:24.370273   64655 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:25:24.552529   64655 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:25:24.554472   64655 out.go:204]   - Generating certificates and keys ...
	I0717 01:25:24.554540   64655 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:25:24.554608   64655 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:25:24.554685   64655 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 01:25:24.554762   64655 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 01:25:24.554850   64655 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 01:25:24.554922   64655 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 01:25:24.555098   64655 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 01:25:24.555704   64655 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 01:25:24.556451   64655 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 01:25:24.557125   64655 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 01:25:24.557344   64655 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 01:25:24.557503   64655 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:25:24.716721   64655 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:25:24.938854   64655 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:25:24.994500   64655 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:25:25.136504   64655 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:25:25.151544   64655 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:25:25.153767   64655 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:25:25.153901   64655 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:25:25.298076   64655 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:25:25.300306   64655 out.go:204]   - Booting up control plane ...
	I0717 01:25:25.300452   64655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:25:25.308230   64655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:25:25.309221   64655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:25:25.309939   64655 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:25:25.311966   64655 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 01:26:05.313343   64655 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 01:26:05.313617   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:26:05.313811   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:26:10.314141   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:26:10.314335   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:26:20.314999   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:26:20.315233   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:26:40.316280   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:26:40.316508   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:27:20.318752   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:27:20.319023   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:27:20.319034   64655 kubeadm.go:310] 
	I0717 01:27:20.319082   64655 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 01:27:20.319172   64655 kubeadm.go:310] 		timed out waiting for the condition
	I0717 01:27:20.319193   64655 kubeadm.go:310] 
	I0717 01:27:20.319242   64655 kubeadm.go:310] 	This error is likely caused by:
	I0717 01:27:20.319288   64655 kubeadm.go:310] 		- The kubelet is not running
	I0717 01:27:20.319420   64655 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 01:27:20.319432   64655 kubeadm.go:310] 
	I0717 01:27:20.319553   64655 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 01:27:20.319610   64655 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 01:27:20.319652   64655 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 01:27:20.319662   64655 kubeadm.go:310] 
	I0717 01:27:20.319803   64655 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 01:27:20.319934   64655 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 01:27:20.319952   64655 kubeadm.go:310] 
	I0717 01:27:20.320111   64655 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 01:27:20.320240   64655 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 01:27:20.320341   64655 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 01:27:20.320449   64655 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 01:27:20.320458   64655 kubeadm.go:310] 
	I0717 01:27:20.321105   64655 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:27:20.321204   64655 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 01:27:20.321281   64655 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0717 01:27:20.321432   64655 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 01:27:20.321490   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 01:27:20.788697   64655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:27:20.803309   64655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:27:20.813699   64655 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:27:20.813724   64655 kubeadm.go:157] found existing configuration files:
	
	I0717 01:27:20.813780   64655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:27:20.823138   64655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:27:20.823208   64655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:27:20.832700   64655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:27:20.842158   64655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:27:20.842220   64655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:27:20.851928   64655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:27:20.861150   64655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:27:20.861221   64655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:27:20.870782   64655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:27:20.879763   64655 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:27:20.879815   64655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:27:20.889043   64655 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:27:21.096020   64655 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:29:17.120217   64655 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 01:29:17.120324   64655 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 01:29:17.122004   64655 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 01:29:17.122074   64655 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:29:17.122162   64655 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:29:17.122282   64655 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:29:17.122404   64655 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:29:17.122483   64655 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:29:17.124214   64655 out.go:204]   - Generating certificates and keys ...
	I0717 01:29:17.124279   64655 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:29:17.124338   64655 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:29:17.124407   64655 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 01:29:17.124491   64655 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 01:29:17.124610   64655 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 01:29:17.124677   64655 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 01:29:17.124743   64655 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 01:29:17.124791   64655 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 01:29:17.124858   64655 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 01:29:17.124945   64655 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 01:29:17.125015   64655 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 01:29:17.125090   64655 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:29:17.125161   64655 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:29:17.125207   64655 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:29:17.125260   64655 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:29:17.125328   64655 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:29:17.125487   64655 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:29:17.125610   64655 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:29:17.125674   64655 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:29:17.125800   64655 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:29:17.127076   64655 out.go:204]   - Booting up control plane ...
	I0717 01:29:17.127167   64655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:29:17.127254   64655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:29:17.127335   64655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:29:17.127426   64655 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:29:17.127553   64655 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 01:29:17.127592   64655 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 01:29:17.127645   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:29:17.127791   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:29:17.127843   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:29:17.128024   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:29:17.128133   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:29:17.128292   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:29:17.128355   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:29:17.128498   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:29:17.128569   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:29:17.128729   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:29:17.128737   64655 kubeadm.go:310] 
	I0717 01:29:17.128767   64655 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 01:29:17.128800   64655 kubeadm.go:310] 		timed out waiting for the condition
	I0717 01:29:17.128805   64655 kubeadm.go:310] 
	I0717 01:29:17.128849   64655 kubeadm.go:310] 	This error is likely caused by:
	I0717 01:29:17.128908   64655 kubeadm.go:310] 		- The kubelet is not running
	I0717 01:29:17.129063   64655 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 01:29:17.129074   64655 kubeadm.go:310] 
	I0717 01:29:17.129204   64655 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 01:29:17.129235   64655 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 01:29:17.129274   64655 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 01:29:17.129288   64655 kubeadm.go:310] 
	I0717 01:29:17.129388   64655 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 01:29:17.129469   64655 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 01:29:17.129482   64655 kubeadm.go:310] 
	I0717 01:29:17.129625   64655 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 01:29:17.129754   64655 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 01:29:17.129861   64655 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 01:29:17.129946   64655 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 01:29:17.130043   64655 kubeadm.go:394] duration metric: took 7m56.056343168s to StartCluster
	I0717 01:29:17.130056   64655 kubeadm.go:310] 
	I0717 01:29:17.130084   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:29:17.130150   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:29:17.170463   64655 cri.go:89] found id: ""
	I0717 01:29:17.170486   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.170496   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:29:17.170502   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:29:17.170553   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:29:17.204996   64655 cri.go:89] found id: ""
	I0717 01:29:17.205021   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.205028   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:29:17.205034   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:29:17.205087   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:29:17.239200   64655 cri.go:89] found id: ""
	I0717 01:29:17.239232   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.239241   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:29:17.239248   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:29:17.239298   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:29:17.274065   64655 cri.go:89] found id: ""
	I0717 01:29:17.274096   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.274104   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:29:17.274112   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:29:17.274170   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:29:17.312132   64655 cri.go:89] found id: ""
	I0717 01:29:17.312161   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.312172   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:29:17.312181   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:29:17.312254   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:29:17.347520   64655 cri.go:89] found id: ""
	I0717 01:29:17.347559   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.347569   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:29:17.347580   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:29:17.347638   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:29:17.386989   64655 cri.go:89] found id: ""
	I0717 01:29:17.387021   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.387032   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:29:17.387040   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:29:17.387103   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:29:17.421790   64655 cri.go:89] found id: ""
	I0717 01:29:17.421815   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.421822   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:29:17.421831   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:29:17.421843   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:29:17.473599   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:29:17.473628   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:29:17.488496   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:29:17.488530   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:29:17.566512   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:29:17.566541   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:29:17.566559   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:29:17.677372   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:29:17.677409   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 01:29:17.725383   64655 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 01:29:17.725434   64655 out.go:239] * 
	W0717 01:29:17.725496   64655 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 01:29:17.725529   64655 out.go:239] * 
	W0717 01:29:17.726376   64655 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 01:29:17.729540   64655 out.go:177] 
	W0717 01:29:17.730940   64655 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 01:29:17.730995   64655 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 01:29:17.731022   64655 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 01:29:17.732408   64655 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-249342 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
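The kubeadm output above already names the manual checks worth running; the sketch below simply collects them, together with the retry that the log's own Suggestion line proposes. It is a minimal diagnostic sketch, assuming the same profile name and the in-repo binary used throughout this report, and is not part of the recorded test run.

	# Inspect the kubelet and any crashed control-plane containers on the failing node
	out/minikube-linux-amd64 -p old-k8s-version-249342 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-249342 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-249342 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the second start with the cgroup-driver override suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-249342 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd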
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342: exit status 2 (243.28853ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-249342 logs -n 25
E0717 01:29:18.739165   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-249342        | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-621535                              | stopped-upgrade-621535       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-729236                           | kubernetes-upgrade-729236    | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-729236                           | kubernetes-upgrade-729236    | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p running-upgrade-261470                              | running-upgrade-261470       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-621535                              | stopped-upgrade-621535       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:19 UTC |
	| start   | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-729236                           | kubernetes-upgrade-729236    | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	| start   | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-249342                              | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-249342             | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-249342                              | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-261470                              | running-upgrade-261470       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	| start   | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:22 UTC |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-484167            | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:21 UTC | 17 Jul 24 01:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-945694  | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC | 17 Jul 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC |                     |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-484167                 | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC | 17 Jul 24 01:28 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-945694       | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC |                     |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC | 17 Jul 24 01:28 UTC |
	| start   | -p no-preload-818382 --memory=2200                     | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:28:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:28:42.248168   67712 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:28:42.248455   67712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:28:42.248466   67712 out.go:304] Setting ErrFile to fd 2...
	I0717 01:28:42.248472   67712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:28:42.248704   67712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:28:42.249305   67712 out.go:298] Setting JSON to false
	I0717 01:28:42.250385   67712 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7871,"bootTime":1721171851,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:28:42.250452   67712 start.go:139] virtualization: kvm guest
	I0717 01:28:42.252706   67712 out.go:177] * [no-preload-818382] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:28:42.254164   67712 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:28:42.254244   67712 notify.go:220] Checking for updates...
	I0717 01:28:42.256843   67712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:28:42.258246   67712 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:28:42.259485   67712 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:28:42.260590   67712 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:28:42.261676   67712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:28:42.263229   67712 config.go:182] Loaded profile config "default-k8s-diff-port-945694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:28:42.263338   67712 config.go:182] Loaded profile config "embed-certs-484167": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:28:42.263454   67712 config.go:182] Loaded profile config "old-k8s-version-249342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:28:42.263548   67712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:28:42.302386   67712 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 01:28:42.303727   67712 start.go:297] selected driver: kvm2
	I0717 01:28:42.303748   67712 start.go:901] validating driver "kvm2" against <nil>
	I0717 01:28:42.303764   67712 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:28:42.304787   67712 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:28:42.304890   67712 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:28:42.321923   67712 install.go:137] /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:28:42.321984   67712 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 01:28:42.322246   67712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:28:42.322326   67712 cni.go:84] Creating CNI manager for ""
	I0717 01:28:42.322343   67712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:28:42.322371   67712 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 01:28:42.322452   67712 start.go:340] cluster config:
	{Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:28:42.322569   67712 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:28:42.324425   67712 out.go:177] * Starting "no-preload-818382" primary control-plane node in "no-preload-818382" cluster
	I0717 01:28:38.500517   66178 main.go:141] libmachine: (embed-certs-484167) Waiting to get IP...
	I0717 01:28:38.501490   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:38.501960   66178 main.go:141] libmachine: (embed-certs-484167) DBG | unable to find current IP address of domain embed-certs-484167 in network mk-embed-certs-484167
	I0717 01:28:38.502046   66178 main.go:141] libmachine: (embed-certs-484167) DBG | I0717 01:28:38.501949   67513 retry.go:31] will retry after 281.899795ms: waiting for machine to come up
	I0717 01:28:38.785423   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:38.785907   66178 main.go:141] libmachine: (embed-certs-484167) DBG | unable to find current IP address of domain embed-certs-484167 in network mk-embed-certs-484167
	I0717 01:28:38.786042   66178 main.go:141] libmachine: (embed-certs-484167) DBG | I0717 01:28:38.785969   67513 retry.go:31] will retry after 363.164065ms: waiting for machine to come up
	I0717 01:28:39.150565   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:39.151407   66178 main.go:141] libmachine: (embed-certs-484167) DBG | unable to find current IP address of domain embed-certs-484167 in network mk-embed-certs-484167
	I0717 01:28:39.151442   66178 main.go:141] libmachine: (embed-certs-484167) DBG | I0717 01:28:39.151311   67513 retry.go:31] will retry after 411.174019ms: waiting for machine to come up
	I0717 01:28:39.563772   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:39.564409   66178 main.go:141] libmachine: (embed-certs-484167) DBG | unable to find current IP address of domain embed-certs-484167 in network mk-embed-certs-484167
	I0717 01:28:39.564434   66178 main.go:141] libmachine: (embed-certs-484167) DBG | I0717 01:28:39.564368   67513 retry.go:31] will retry after 436.415408ms: waiting for machine to come up
	I0717 01:28:40.002824   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:40.003439   66178 main.go:141] libmachine: (embed-certs-484167) DBG | unable to find current IP address of domain embed-certs-484167 in network mk-embed-certs-484167
	I0717 01:28:40.003485   66178 main.go:141] libmachine: (embed-certs-484167) DBG | I0717 01:28:40.003382   67513 retry.go:31] will retry after 759.682144ms: waiting for machine to come up
	I0717 01:28:40.764598   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:40.765174   66178 main.go:141] libmachine: (embed-certs-484167) DBG | unable to find current IP address of domain embed-certs-484167 in network mk-embed-certs-484167
	I0717 01:28:40.765211   66178 main.go:141] libmachine: (embed-certs-484167) DBG | I0717 01:28:40.765141   67513 retry.go:31] will retry after 920.0523ms: waiting for machine to come up
	I0717 01:28:41.983414   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:41.983845   66178 main.go:141] libmachine: (embed-certs-484167) DBG | unable to find current IP address of domain embed-certs-484167 in network mk-embed-certs-484167
	I0717 01:28:41.983880   66178 main.go:141] libmachine: (embed-certs-484167) DBG | I0717 01:28:41.983803   67513 retry.go:31] will retry after 1.13168383s: waiting for machine to come up
	I0717 01:28:40.298557   66659 crio.go:462] duration metric: took 1.520522297s to copy over tarball
	I0717 01:28:40.298639   66659 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:28:42.773928   66659 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.475232069s)
	I0717 01:28:42.773972   66659 crio.go:469] duration metric: took 2.475380506s to extract the tarball
	I0717 01:28:42.773982   66659 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:28:42.812525   66659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:28:42.869621   66659 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:28:42.869646   66659 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:28:42.869657   66659 kubeadm.go:934] updating node { 192.168.50.30 8444 v1.30.2 crio true true} ...
	I0717 01:28:42.869792   66659 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-945694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-945694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:28:42.869886   66659 ssh_runner.go:195] Run: crio config
	I0717 01:28:42.940405   66659 cni.go:84] Creating CNI manager for ""
	I0717 01:28:42.940426   66659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:28:42.940439   66659 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:28:42.940462   66659 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.30 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-945694 NodeName:default-k8s-diff-port-945694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:28:42.940618   66659 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.30
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-945694"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
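The kubeadm.yaml rendered above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is then copied to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of what a sanity check over such a file could look like, here is a minimal Go sketch that decodes each document and verifies it carries apiVersion and kind; the local file path and the check itself are assumptions for the example, not minikube's actual validation:

package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Hypothetical local copy of the rendered config shown above.
	raw, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(raw))
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			log.Fatalf("invalid YAML document: %v", err)
		}
		// Every document in the file should carry apiVersion and kind
		// (InitConfiguration, ClusterConfiguration, KubeletConfiguration,
		// KubeProxyConfiguration in the config above).
		kind, _ := doc["kind"].(string)
		apiVersion, _ := doc["apiVersion"].(string)
		if kind == "" || apiVersion == "" {
			log.Fatalf("document missing kind or apiVersion: %v", doc)
		}
		fmt.Printf("found %s (%s)\n", kind, apiVersion)
	}
}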
	I0717 01:28:42.940715   66659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:28:42.951654   66659 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:28:42.951722   66659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:28:42.961555   66659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0717 01:28:42.978248   66659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:28:42.994670   66659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0717 01:28:43.011285   66659 ssh_runner.go:195] Run: grep 192.168.50.30	control-plane.minikube.internal$ /etc/hosts
	I0717 01:28:43.015053   66659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:28:43.027712   66659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:28:43.165004   66659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:28:43.183106   66659 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694 for IP: 192.168.50.30
	I0717 01:28:43.183127   66659 certs.go:194] generating shared ca certs ...
	I0717 01:28:43.183145   66659 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:28:43.183314   66659 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:28:43.183367   66659 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:28:43.183383   66659 certs.go:256] generating profile certs ...
	I0717 01:28:43.183513   66659 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.key
	I0717 01:28:43.183591   66659 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/apiserver.key.b4f4c923
	I0717 01:28:43.183654   66659 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/proxy-client.key
	I0717 01:28:43.183805   66659 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:28:43.183852   66659 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:28:43.183865   66659 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:28:43.183893   66659 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:28:43.183935   66659 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:28:43.183968   66659 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:28:43.184031   66659 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:28:43.184689   66659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:28:43.214355   66659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:28:43.240670   66659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:28:43.273518   66659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:28:43.305189   66659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 01:28:43.327996   66659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:28:43.350974   66659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:28:43.374842   66659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:28:43.398191   66659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:28:43.420908   66659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:28:43.445843   66659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:28:43.477260   66659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:28:43.494985   66659 ssh_runner.go:195] Run: openssl version
	I0717 01:28:43.501235   66659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:28:43.514872   66659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:28:43.521065   66659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:28:43.521133   66659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:28:43.527803   66659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:28:43.540586   66659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:28:43.554084   66659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:28:43.559157   66659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:28:43.559220   66659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:28:43.565665   66659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:28:43.576951   66659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:28:43.589551   66659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:28:43.594184   66659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:28:43.594232   66659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:28:43.600014   66659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:28:43.612796   66659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:28:43.618119   66659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:28:43.628945   66659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:28:43.636025   66659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:28:43.642462   66659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:28:43.649296   66659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:28:43.655738   66659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
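The openssl x509 -checkend 86400 invocations above ask whether each certificate is still valid for at least another 24 hours before reusing the existing cluster certificates. A minimal Go sketch of the same check, assuming a PEM-encoded certificate on disk (the path is taken from the log for illustration only):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// One of the certificates checked in the log above.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
	// is already expired or will expire within the next 24 hours.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}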
	I0717 01:28:43.662213   66659 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-945694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-945694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:28:43.662331   66659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:28:43.662395   66659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:28:43.729456   66659 cri.go:89] found id: ""
	I0717 01:28:43.729545   66659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:28:43.749711   66659 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:28:43.749735   66659 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:28:43.749784   66659 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:28:43.762421   66659 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:28:43.763890   66659 kubeconfig.go:125] found "default-k8s-diff-port-945694" server: "https://192.168.50.30:8444"
	I0717 01:28:43.766760   66659 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:28:43.786785   66659 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.30
	I0717 01:28:43.786820   66659 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:28:43.786833   66659 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:28:43.786906   66659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:28:43.841101   66659 cri.go:89] found id: ""
	I0717 01:28:43.841179   66659 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:28:43.862378   66659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:28:43.889903   66659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:28:43.889928   66659 kubeadm.go:157] found existing configuration files:
	
	I0717 01:28:43.889985   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 01:28:43.902232   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:28:43.902296   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:28:43.915011   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 01:28:43.926878   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:28:43.926946   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:28:43.940619   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 01:28:43.953868   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:28:43.953931   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:28:43.967509   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 01:28:43.980164   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:28:43.980223   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:28:43.992629   66659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:28:44.005264   66659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:28:44.153842   66659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:28:42.325701   67712 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:28:42.325826   67712 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/config.json ...
	I0717 01:28:42.325858   67712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/config.json: {Name:mk1bbea8aa59252841063fb1026bc6f10e1a5465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:28:42.325950   67712 cache.go:107] acquiring lock: {Name:mk0dda4d4cdd92722b746ab931e6544cfc8daee5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:28:42.326013   67712 cache.go:107] acquiring lock: {Name:mkddaaee919763be73bfba0c581555b8cc97a67b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:28:42.326000   67712 cache.go:107] acquiring lock: {Name:mkf2f11535addf893c2faa84c376231e8d922e64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:28:42.326067   67712 cache.go:107] acquiring lock: {Name:mk0f717937d10c133c40dfa3d731090d6e186c8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:28:42.326074   67712 cache.go:107] acquiring lock: {Name:mk2ca5e82f37242a4f02d1776db6559bdb43421e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:28:42.326124   67712 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:28:42.326149   67712 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:28:42.326187   67712 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 01:28:42.326172   67712 cache.go:107] acquiring lock: {Name:mkecaf352dd381368806d2a149fd31f0c349a680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:28:42.325958   67712 cache.go:107] acquiring lock: {Name:mk1de3a52aa61e3b4e847379240ac3935bedb199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:28:42.326286   67712 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:28:42.326293   67712 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:28:42.326281   67712 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 01:28:42.326426   67712 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 471.542µs
	I0717 01:28:42.326091   67712 cache.go:107] acquiring lock: {Name:mkf6e5b69e84ed3f384772a188b9364b7e3d5b5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:28:42.326447   67712 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 01:28:42.326504   67712 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:28:42.326531   67712 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:28:42.326659   67712 start.go:360] acquireMachinesLock for no-preload-818382: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:28:42.327430   67712 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:28:42.327488   67712 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 01:28:42.327571   67712 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:28:42.327739   67712 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:28:42.327778   67712 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:28:42.327845   67712 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:28:42.328211   67712 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:28:42.482712   67712 cache.go:162] opening:  /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 01:28:42.493616   67712 cache.go:162] opening:  /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 01:28:42.495608   67712 cache.go:162] opening:  /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 01:28:42.518418   67712 cache.go:162] opening:  /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 01:28:42.520383   67712 cache.go:162] opening:  /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 01:28:42.532247   67712 cache.go:162] opening:  /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0717 01:28:42.587204   67712 cache.go:162] opening:  /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 01:28:42.601126   67712 cache.go:157] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0717 01:28:42.601171   67712 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 275.198656ms
	I0717 01:28:42.601184   67712 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0717 01:28:42.931199   67712 cache.go:157] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0717 01:28:42.931232   67712 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 605.063225ms
	I0717 01:28:42.931247   67712 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0717 01:28:43.653263   67712 cache.go:157] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0717 01:28:43.653287   67712 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 1.327276816s
	I0717 01:28:43.653297   67712 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0717 01:28:43.952271   67712 cache.go:157] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0717 01:28:43.952300   67712 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 1.626363817s
	I0717 01:28:43.952312   67712 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0717 01:28:44.037047   67712 cache.go:157] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0717 01:28:44.037082   67712 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 1.711129837s
	I0717 01:28:44.037096   67712 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0717 01:28:44.108291   67712 cache.go:157] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0717 01:28:44.108322   67712 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 1.782288975s
	I0717 01:28:44.108337   67712 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0717 01:28:44.520328   67712 cache.go:157] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 exists
	I0717 01:28:44.520355   67712 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0" took 2.194340003s
	I0717 01:28:44.520366   67712 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0717 01:28:44.520381   67712 cache.go:87] Successfully saved all images to host disk.
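The cache.go entries above follow a check-then-save pattern: when an image tarball already exists under .minikube/cache/images the entry is reported as "exists" and the download is skipped, otherwise the image is fetched and written out as a tar file. A simplified Go sketch of that pattern; saveImage and the cache directory are hypothetical stand-ins, not minikube's real implementation:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// saveImage is a hypothetical placeholder for pulling an image and
// writing it to a tar file; minikube's real logic lives in cache.go.
func saveImage(image, dst string) error {
	return os.WriteFile(dst, []byte("tar contents for "+image), 0o644)
}

func cacheImage(cacheDir, image string) error {
	start := time.Now()
	// e.g. registry.k8s.io/pause:3.10 -> registry.k8s.io/pause_3.10
	dst := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	if _, err := os.Stat(dst); err == nil {
		// Matches the "exists ... took ..." lines in the log above.
		fmt.Printf("cache image %q -> %q took %s (already exists)\n", image, dst, time.Since(start))
		return nil
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	if err := saveImage(image, dst); err != nil {
		return err
	}
	fmt.Printf("save to tar file %s -> %s succeeded\n", image, dst)
	return nil
}

func main() {
	if err := cacheImage("/tmp/minikube-cache/images/amd64", "registry.k8s.io/pause:3.10"); err != nil {
		log.Fatal(err)
	}
}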
	I0717 01:28:43.117412   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:43.117881   66178 main.go:141] libmachine: (embed-certs-484167) DBG | unable to find current IP address of domain embed-certs-484167 in network mk-embed-certs-484167
	I0717 01:28:43.117907   66178 main.go:141] libmachine: (embed-certs-484167) DBG | I0717 01:28:43.117840   67513 retry.go:31] will retry after 1.173242694s: waiting for machine to come up
	I0717 01:28:44.292188   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:44.292697   66178 main.go:141] libmachine: (embed-certs-484167) DBG | unable to find current IP address of domain embed-certs-484167 in network mk-embed-certs-484167
	I0717 01:28:44.292727   66178 main.go:141] libmachine: (embed-certs-484167) DBG | I0717 01:28:44.292672   67513 retry.go:31] will retry after 1.657422209s: waiting for machine to come up
	I0717 01:28:45.951987   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:45.952395   66178 main.go:141] libmachine: (embed-certs-484167) DBG | unable to find current IP address of domain embed-certs-484167 in network mk-embed-certs-484167
	I0717 01:28:45.952443   66178 main.go:141] libmachine: (embed-certs-484167) DBG | I0717 01:28:45.952374   67513 retry.go:31] will retry after 1.776109017s: waiting for machine to come up
	I0717 01:28:44.990801   66659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:28:45.192606   66659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:28:45.278247   66659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:28:45.377743   66659 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:28:45.377845   66659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:28:45.878595   66659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:28:46.378895   66659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:28:46.397247   66659 api_server.go:72] duration metric: took 1.019502837s to wait for apiserver process to appear ...
	I0717 01:28:46.397277   66659 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:28:46.397317   66659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0717 01:28:46.397833   66659 api_server.go:269] stopped: https://192.168.50.30:8444/healthz: Get "https://192.168.50.30:8444/healthz": dial tcp 192.168.50.30:8444: connect: connection refused
	I0717 01:28:46.898063   66659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0717 01:28:49.385181   66659 api_server.go:279] https://192.168.50.30:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:28:49.385224   66659 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:28:49.385254   66659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0717 01:28:49.411505   66659 api_server.go:279] https://192.168.50.30:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:28:49.411540   66659 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:28:49.411556   66659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0717 01:28:49.433407   66659 api_server.go:279] https://192.168.50.30:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:28:49.433436   66659 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:28:49.898076   66659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0717 01:28:49.903555   66659 api_server.go:279] https://192.168.50.30:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:28:49.903580   66659 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:28:50.397873   66659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0717 01:28:50.402548   66659 api_server.go:279] https://192.168.50.30:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:28:50.402583   66659 api_server.go:103] status: https://192.168.50.30:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:28:50.897688   66659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0717 01:28:50.901964   66659 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0717 01:28:50.909553   66659 api_server.go:141] control plane version: v1.30.2
	I0717 01:28:50.909578   66659 api_server.go:131] duration metric: took 4.512294813s to wait for apiserver health ...
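The healthz probes above are retried roughly every 500ms, with 403 (anonymous access not yet authorized) and 500 (post-start hooks still running) both treated as "not ready yet", until a plain 200/ok response comes back. A minimal polling sketch along those lines; the endpoint, timeout and the InsecureSkipVerify shortcut are assumptions for brevity, not minikube's actual client configuration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping TLS verification keeps the sketch short; a real client
		// would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return nil
			}
			// 403 and 500 both mean "try again", as seen in the log above.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.30:8444/healthz", 2*time.Minute); err != nil {
		log.Fatal(err)
	}
}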
	I0717 01:28:50.909586   66659 cni.go:84] Creating CNI manager for ""
	I0717 01:28:50.909592   66659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:28:50.911145   66659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:28:47.730632   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:47.731166   66178 main.go:141] libmachine: (embed-certs-484167) DBG | unable to find current IP address of domain embed-certs-484167 in network mk-embed-certs-484167
	I0717 01:28:47.731197   66178 main.go:141] libmachine: (embed-certs-484167) DBG | I0717 01:28:47.731137   67513 retry.go:31] will retry after 2.055902269s: waiting for machine to come up
	I0717 01:28:49.788196   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:49.788733   66178 main.go:141] libmachine: (embed-certs-484167) DBG | unable to find current IP address of domain embed-certs-484167 in network mk-embed-certs-484167
	I0717 01:28:49.788767   66178 main.go:141] libmachine: (embed-certs-484167) DBG | I0717 01:28:49.788675   67513 retry.go:31] will retry after 2.679944782s: waiting for machine to come up
	I0717 01:28:52.470485   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:52.470913   66178 main.go:141] libmachine: (embed-certs-484167) DBG | unable to find current IP address of domain embed-certs-484167 in network mk-embed-certs-484167
	I0717 01:28:52.470938   66178 main.go:141] libmachine: (embed-certs-484167) DBG | I0717 01:28:52.470880   67513 retry.go:31] will retry after 3.320198282s: waiting for machine to come up
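The embed-certs-484167 lines above show retry.go waiting for the newly created VM to pick up a DHCP lease, with progressively longer delays between attempts (281ms, 363ms, 411ms, ... up to several seconds). A small sketch of such a wait loop; lookupIP is a hypothetical placeholder for querying libvirt for the domain's address:

package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

// lookupIP is a hypothetical stand-in for asking libvirt (or the DHCP lease
// file) for the machine's address; here it simply fails for a few attempts.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.168.39.2", nil // dummy address for the sketch
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, similar to the increasing
		// "will retry after ..." intervals in the log above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("machine did not get an IP within %s", timeout)
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("machine is up at", ip)
}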
	I0717 01:28:50.912316   66659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:28:50.925982   66659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:28:50.945437   66659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:28:50.963357   66659 system_pods.go:59] 8 kube-system pods found
	I0717 01:28:50.963396   66659 system_pods.go:61] "coredns-7db6d8ff4d-vwv7z" [eac09e0f-5803-4237-97f3-39255efd7792] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:28:50.963407   66659 system_pods.go:61] "etcd-default-k8s-diff-port-945694" [8e405826-1e21-464d-9948-1974c506bca7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:28:50.963416   66659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-945694" [74be70d2-dca5-465e-be95-12301db15741] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:28:50.963429   66659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-945694" [42f668a8-cf5a-4756-9ff1-aca1061c3c69] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:28:50.963439   66659 system_pods.go:61] "kube-proxy-7vv55" [ab59794c-e929-4761-a128-cf69e0198c00] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:28:50.963450   66659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-945694" [55c78b5a-87b1-49bc-bc74-220bdd63ad2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:28:50.963462   66659 system_pods.go:61] "metrics-server-569cc877fc-wmss9" [b38f90e5-5dee-4fb1-b197-c2ccf9d91bbc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:28:50.963471   66659 system_pods.go:61] "storage-provisioner" [8df634e1-f051-458a-b99f-c25f1c4196db] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 01:28:50.963480   66659 system_pods.go:74] duration metric: took 18.0193ms to wait for pod list to return data ...
	I0717 01:28:50.963492   66659 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:28:50.969550   66659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:28:50.969572   66659 node_conditions.go:123] node cpu capacity is 2
	I0717 01:28:50.969583   66659 node_conditions.go:105] duration metric: took 6.08555ms to run NodePressure ...
	I0717 01:28:50.969599   66659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:28:51.258228   66659 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:28:51.262319   66659 kubeadm.go:739] kubelet initialised
	I0717 01:28:51.262336   66659 kubeadm.go:740] duration metric: took 4.075381ms waiting for restarted kubelet to initialise ...
	I0717 01:28:51.262350   66659 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:28:51.267588   66659 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-vwv7z" in "kube-system" namespace to be "Ready" ...
	I0717 01:28:53.274277   66659 pod_ready.go:102] pod "coredns-7db6d8ff4d-vwv7z" in "kube-system" namespace has status "Ready":"False"
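pod_ready.go above keeps polling until each system-critical pod reports the Ready condition. A compact client-go sketch of that kind of wait; the kubeconfig path is illustrative, the pod name is taken from the log, and this is not minikube's own helper:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-vwv7z", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatalf("timed out waiting for pod: %v", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}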
	I0717 01:28:57.074731   67712 start.go:364] duration metric: took 14.748025908s to acquireMachinesLock for "no-preload-818382"
	I0717 01:28:57.074801   67712 start.go:93] Provisioning new machine with config: &{Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:28:57.074915   67712 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 01:28:57.076545   67712 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 01:28:57.076732   67712 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:28:57.076769   67712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:28:57.096403   67712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36049
	I0717 01:28:57.096865   67712 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:28:57.097511   67712 main.go:141] libmachine: Using API Version  1
	I0717 01:28:57.097536   67712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:28:57.097942   67712 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:28:57.098156   67712 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:28:57.098326   67712 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:28:57.098487   67712 start.go:159] libmachine.API.Create for "no-preload-818382" (driver="kvm2")
	I0717 01:28:57.098510   67712 client.go:168] LocalClient.Create starting
	I0717 01:28:57.098545   67712 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 01:28:57.098574   67712 main.go:141] libmachine: Decoding PEM data...
	I0717 01:28:57.098588   67712 main.go:141] libmachine: Parsing certificate...
	I0717 01:28:57.098673   67712 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 01:28:57.098700   67712 main.go:141] libmachine: Decoding PEM data...
	I0717 01:28:57.098716   67712 main.go:141] libmachine: Parsing certificate...
	I0717 01:28:57.098741   67712 main.go:141] libmachine: Running pre-create checks...
	I0717 01:28:57.098754   67712 main.go:141] libmachine: (no-preload-818382) Calling .PreCreateCheck
	I0717 01:28:57.099113   67712 main.go:141] libmachine: (no-preload-818382) Calling .GetConfigRaw
	I0717 01:28:57.099498   67712 main.go:141] libmachine: Creating machine...
	I0717 01:28:57.099513   67712 main.go:141] libmachine: (no-preload-818382) Calling .Create
	I0717 01:28:57.099622   67712 main.go:141] libmachine: (no-preload-818382) Creating KVM machine...
	I0717 01:28:57.100790   67712 main.go:141] libmachine: (no-preload-818382) DBG | found existing default KVM network
	I0717 01:28:57.102244   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:28:57.102094   67801 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f800}
	I0717 01:28:57.102262   67712 main.go:141] libmachine: (no-preload-818382) DBG | created network xml: 
	I0717 01:28:57.102272   67712 main.go:141] libmachine: (no-preload-818382) DBG | <network>
	I0717 01:28:57.102277   67712 main.go:141] libmachine: (no-preload-818382) DBG |   <name>mk-no-preload-818382</name>
	I0717 01:28:57.102283   67712 main.go:141] libmachine: (no-preload-818382) DBG |   <dns enable='no'/>
	I0717 01:28:57.102288   67712 main.go:141] libmachine: (no-preload-818382) DBG |   
	I0717 01:28:57.102294   67712 main.go:141] libmachine: (no-preload-818382) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 01:28:57.102299   67712 main.go:141] libmachine: (no-preload-818382) DBG |     <dhcp>
	I0717 01:28:57.102306   67712 main.go:141] libmachine: (no-preload-818382) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 01:28:57.102313   67712 main.go:141] libmachine: (no-preload-818382) DBG |     </dhcp>
	I0717 01:28:57.102321   67712 main.go:141] libmachine: (no-preload-818382) DBG |   </ip>
	I0717 01:28:57.102331   67712 main.go:141] libmachine: (no-preload-818382) DBG |   
	I0717 01:28:57.102339   67712 main.go:141] libmachine: (no-preload-818382) DBG | </network>
	I0717 01:28:57.102353   67712 main.go:141] libmachine: (no-preload-818382) DBG | 
	I0717 01:28:57.107418   67712 main.go:141] libmachine: (no-preload-818382) DBG | trying to create private KVM network mk-no-preload-818382 192.168.39.0/24...
	I0717 01:28:57.176057   67712 main.go:141] libmachine: (no-preload-818382) DBG | private KVM network mk-no-preload-818382 192.168.39.0/24 created
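The block above is the generated libvirt <network> definition for the new VM. As a rough illustration of the same step (not minikube's actual code), the following Go sketch renders an equivalent network XML and creates it through the virsh CLI; the network name and addresses are taken from the log, everything else is assumed.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// networkXML renders a libvirt <network> definition shaped like the one in
// the log: DNS disabled, DHCP handing out .2-.253 on a /24.
func networkXML(name, gateway, start, end string) string {
	return fmt.Sprintf(`<network>
  <name>%s</name>
  <dns enable='no'/>
  <ip address='%s' netmask='255.255.255.0'>
    <dhcp>
      <range start='%s' end='%s'/>
    </dhcp>
  </ip>
</network>`, name, gateway, start, end)
}

func main() {
	xml := networkXML("mk-no-preload-818382", "192.168.39.1", "192.168.39.2", "192.168.39.253")
	f, err := os.CreateTemp("", "net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(xml); err != nil {
		panic(err)
	}
	f.Close()

	// Define and start the network with the virsh CLI (illustration only).
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-no-preload-818382"},
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("virsh %v: %v\n%s", args, err, out))
		}
	}
}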
	I0717 01:28:57.176090   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:28:57.176053   67801 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:28:57.176105   67712 main.go:141] libmachine: (no-preload-818382) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382 ...
	I0717 01:28:57.176124   67712 main.go:141] libmachine: (no-preload-818382) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 01:28:57.176220   67712 main.go:141] libmachine: (no-preload-818382) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
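The machine store setup above boils down to copying the cached ISO and allocating a raw disk of the requested size (20000 MB here). Below is a heavily simplified Go sketch of the disk allocation with placeholder paths; minikube's real implementation also writes tar headers and the SSH key into the image, which is skipped here.

package main

import (
	"fmt"
	"os"
)

// createRawDisk allocates a sparse raw disk image of the requested size in
// MB, a simplified stand-in for the .rawdisk file the log builds alongside
// the copied boot ISO.
func createRawDisk(path string, sizeMB int64) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	// Truncate grows the file without writing data, so the image stays sparse.
	return f.Truncate(sizeMB * 1024 * 1024)
}

func main() {
	dir := "/tmp/no-preload-818382" // placeholder for the machine store path
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	if err := createRawDisk(dir+"/no-preload-818382.rawdisk", 20000); err != nil {
		panic(err)
	}
	fmt.Println("raw disk image created")
}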
	I0717 01:28:55.795252   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:55.795692   66178 main.go:141] libmachine: (embed-certs-484167) Found IP for machine: 192.168.72.48
	I0717 01:28:55.795714   66178 main.go:141] libmachine: (embed-certs-484167) Reserving static IP address...
	I0717 01:28:55.795725   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has current primary IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:55.796136   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "embed-certs-484167", mac: "52:54:00:cf:68:c9", ip: "192.168.72.48"} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:55.796156   66178 main.go:141] libmachine: (embed-certs-484167) Reserved static IP address: 192.168.72.48
	I0717 01:28:55.796169   66178 main.go:141] libmachine: (embed-certs-484167) DBG | skip adding static IP to network mk-embed-certs-484167 - found existing host DHCP lease matching {name: "embed-certs-484167", mac: "52:54:00:cf:68:c9", ip: "192.168.72.48"}
	I0717 01:28:55.796182   66178 main.go:141] libmachine: (embed-certs-484167) DBG | Getting to WaitForSSH function...
	I0717 01:28:55.796193   66178 main.go:141] libmachine: (embed-certs-484167) Waiting for SSH to be available...
	I0717 01:28:55.798220   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:55.798557   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:55.798600   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:55.798746   66178 main.go:141] libmachine: (embed-certs-484167) DBG | Using SSH client type: external
	I0717 01:28:55.798770   66178 main.go:141] libmachine: (embed-certs-484167) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/embed-certs-484167/id_rsa (-rw-------)
	I0717 01:28:55.798791   66178 main.go:141] libmachine: (embed-certs-484167) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/embed-certs-484167/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:28:55.798806   66178 main.go:141] libmachine: (embed-certs-484167) DBG | About to run SSH command:
	I0717 01:28:55.798815   66178 main.go:141] libmachine: (embed-certs-484167) DBG | exit 0
	I0717 01:28:55.928515   66178 main.go:141] libmachine: (embed-certs-484167) DBG | SSH cmd err, output: <nil>: 
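The "exit 0" probe above is how libmachine decides SSH is ready. A minimal Go sketch of such a readiness loop, using the same ssh options seen in the log; the key path and timeout are placeholders, and this is not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps running "exit 0" over SSH until it succeeds or the
// deadline passes, mirroring the readiness probe visible in the log.
func waitForSSH(ip, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+ip,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH is up
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh to %s not ready after %s", ip, timeout)
}

func main() {
	if err := waitForSSH("192.168.72.48", "/path/to/id_rsa", 2*time.Minute); err != nil {
		panic(err)
	}
}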
	I0717 01:28:55.929004   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetConfigRaw
	I0717 01:28:55.929624   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetIP
	I0717 01:28:55.932147   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:55.932457   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:55.932486   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:55.932742   66178 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/embed-certs-484167/config.json ...
	I0717 01:28:55.933077   66178 machine.go:94] provisionDockerMachine start ...
	I0717 01:28:55.933102   66178 main.go:141] libmachine: (embed-certs-484167) Calling .DriverName
	I0717 01:28:55.933296   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:28:55.935771   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:55.936108   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:55.936136   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:55.936250   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHPort
	I0717 01:28:55.936423   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:55.936585   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:55.936708   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHUsername
	I0717 01:28:55.936872   66178 main.go:141] libmachine: Using SSH client type: native
	I0717 01:28:55.937076   66178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0717 01:28:55.937091   66178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:28:56.048835   66178 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:28:56.048861   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetMachineName
	I0717 01:28:56.049126   66178 buildroot.go:166] provisioning hostname "embed-certs-484167"
	I0717 01:28:56.049149   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetMachineName
	I0717 01:28:56.049324   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:28:56.052023   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.052353   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:56.052376   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.052477   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHPort
	I0717 01:28:56.052659   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:56.052837   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:56.052987   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHUsername
	I0717 01:28:56.053137   66178 main.go:141] libmachine: Using SSH client type: native
	I0717 01:28:56.053302   66178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0717 01:28:56.053318   66178 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-484167 && echo "embed-certs-484167" | sudo tee /etc/hostname
	I0717 01:28:56.179463   66178 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-484167
	
	I0717 01:28:56.179496   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:28:56.182737   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.183089   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:56.183121   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.183268   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHPort
	I0717 01:28:56.183437   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:56.183594   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:56.183759   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHUsername
	I0717 01:28:56.183907   66178 main.go:141] libmachine: Using SSH client type: native
	I0717 01:28:56.184069   66178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0717 01:28:56.184084   66178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-484167' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-484167/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-484167' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:28:56.305864   66178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
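The two SSH commands above set the hostname and patch /etc/hosts. A small Go helper that reproduces that shell snippet for an arbitrary node name (illustrative only):

package main

import "fmt"

// hostnameScript reproduces the shell run over SSH in the log: set the
// hostname, then make sure 127.0.1.1 in /etc/hosts points at it.
func hostnameScript(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameScript("embed-certs-484167"))
}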
	I0717 01:28:56.305901   66178 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 01:28:56.305924   66178 buildroot.go:174] setting up certificates
	I0717 01:28:56.305958   66178 provision.go:84] configureAuth start
	I0717 01:28:56.305973   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetMachineName
	I0717 01:28:56.306403   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetIP
	I0717 01:28:56.309254   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.309671   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:56.309697   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.309861   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:28:56.312486   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.312748   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:56.312774   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.312902   66178 provision.go:143] copyHostCerts
	I0717 01:28:56.312972   66178 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 01:28:56.312989   66178 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 01:28:56.313056   66178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 01:28:56.313163   66178 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 01:28:56.313173   66178 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 01:28:56.313204   66178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 01:28:56.313276   66178 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 01:28:56.313285   66178 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 01:28:56.313310   66178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 01:28:56.313385   66178 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.embed-certs-484167 san=[127.0.0.1 192.168.72.48 embed-certs-484167 localhost minikube]
	I0717 01:28:56.384918   66178 provision.go:177] copyRemoteCerts
	I0717 01:28:56.384991   66178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:28:56.385020   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:28:56.387753   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.388076   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:56.388102   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.388266   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHPort
	I0717 01:28:56.388440   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:56.388608   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHUsername
	I0717 01:28:56.388733   66178 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/embed-certs-484167/id_rsa Username:docker}
	I0717 01:28:56.476759   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 01:28:56.502282   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:28:56.525780   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:28:56.549366   66178 provision.go:87] duration metric: took 243.39664ms to configureAuth
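configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the VM IP, the profile name, localhost and minikube, signed by the local CA. A self-contained Go sketch of that kind of certificate generation, assuming an RSA CA key in PKCS#1 form and placeholder file paths; it mirrors the SAN list from the log but is not minikube's provision code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustDecode(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block
}

func main() {
	// Placeholder paths; the real CA lives under .minikube/certs in the log.
	caCert, err := x509.ParseCertificate(mustDecode("ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem").Bytes) // assumes a PKCS#1 RSA key
	if err != nil {
		panic(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-484167"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the "generating server cert ... san=[...]" log line.
		DNSNames:    []string{"embed-certs-484167", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.48")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey),
	}), 0o600)
}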
	I0717 01:28:56.549391   66178 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:28:56.549571   66178 config.go:182] Loaded profile config "embed-certs-484167": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:28:56.549651   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:28:56.551897   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.552217   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:56.552248   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.552361   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHPort
	I0717 01:28:56.552515   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:56.552699   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:56.552817   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHUsername
	I0717 01:28:56.552998   66178 main.go:141] libmachine: Using SSH client type: native
	I0717 01:28:56.553157   66178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0717 01:28:56.553173   66178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:28:56.826472   66178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:28:56.826501   66178 machine.go:97] duration metric: took 893.407058ms to provisionDockerMachine
	I0717 01:28:56.826514   66178 start.go:293] postStartSetup for "embed-certs-484167" (driver="kvm2")
	I0717 01:28:56.826527   66178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:28:56.826548   66178 main.go:141] libmachine: (embed-certs-484167) Calling .DriverName
	I0717 01:28:56.826854   66178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:28:56.826881   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:28:56.829688   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.830007   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:56.830036   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.830156   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHPort
	I0717 01:28:56.830351   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:56.830524   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHUsername
	I0717 01:28:56.830793   66178 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/embed-certs-484167/id_rsa Username:docker}
	I0717 01:28:56.919565   66178 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:28:56.923782   66178 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:28:56.923812   66178 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:28:56.923879   66178 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:28:56.923950   66178 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:28:56.924036   66178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:28:56.933775   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:28:56.956545   66178 start.go:296] duration metric: took 130.019276ms for postStartSetup
	I0717 01:28:56.956597   66178 fix.go:56] duration metric: took 19.810808453s for fixHost
	I0717 01:28:56.956620   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:28:56.959013   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.959376   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:56.959396   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:56.959525   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHPort
	I0717 01:28:56.959717   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:56.959883   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:56.960031   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHUsername
	I0717 01:28:56.960163   66178 main.go:141] libmachine: Using SSH client type: native
	I0717 01:28:56.960312   66178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0717 01:28:56.960322   66178 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:28:57.074562   66178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721179737.048288457
	
	I0717 01:28:57.074586   66178 fix.go:216] guest clock: 1721179737.048288457
	I0717 01:28:57.074595   66178 fix.go:229] Guest: 2024-07-17 01:28:57.048288457 +0000 UTC Remote: 2024-07-17 01:28:56.956601827 +0000 UTC m=+324.357230399 (delta=91.68663ms)
	I0717 01:28:57.074629   66178 fix.go:200] guest clock delta is within tolerance: 91.68663ms
	I0717 01:28:57.074636   66178 start.go:83] releasing machines lock for "embed-certs-484167", held for 19.928880768s
	I0717 01:28:57.074666   66178 main.go:141] libmachine: (embed-certs-484167) Calling .DriverName
	I0717 01:28:57.074981   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetIP
	I0717 01:28:57.078071   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:57.078534   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:57.078565   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:57.078873   66178 main.go:141] libmachine: (embed-certs-484167) Calling .DriverName
	I0717 01:28:57.079390   66178 main.go:141] libmachine: (embed-certs-484167) Calling .DriverName
	I0717 01:28:57.079574   66178 main.go:141] libmachine: (embed-certs-484167) Calling .DriverName
	I0717 01:28:57.079667   66178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:28:57.079711   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:28:57.079944   66178 ssh_runner.go:195] Run: cat /version.json
	I0717 01:28:57.079971   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:28:57.082298   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:57.082475   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:57.082645   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:57.082676   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:57.082793   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHPort
	I0717 01:28:57.082895   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:57.082923   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:57.082958   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:57.083072   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHPort
	I0717 01:28:57.083136   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHUsername
	I0717 01:28:57.083232   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:28:57.083279   66178 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/embed-certs-484167/id_rsa Username:docker}
	I0717 01:28:57.083354   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHUsername
	I0717 01:28:57.083497   66178 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/embed-certs-484167/id_rsa Username:docker}
	I0717 01:28:57.170088   66178 ssh_runner.go:195] Run: systemctl --version
	I0717 01:28:57.203678   66178 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:28:57.354523   66178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:28:57.361033   66178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:28:57.361095   66178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:28:57.377930   66178 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:28:57.377954   66178 start.go:495] detecting cgroup driver to use...
	I0717 01:28:57.378007   66178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:28:57.396489   66178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:28:57.412042   66178 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:28:57.412118   66178 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:28:57.426796   66178 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:28:57.440622   66178 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:28:57.565505   66178 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:28:57.719483   66178 docker.go:233] disabling docker service ...
	I0717 01:28:57.719557   66178 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:28:57.734255   66178 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:28:57.746634   66178 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:28:57.875205   66178 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:28:57.995496   66178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:28:58.013898   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:28:58.032842   66178 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:28:58.032901   66178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:28:58.043488   66178 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:28:58.043559   66178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:28:58.053950   66178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:28:58.064231   66178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:28:58.074601   66178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:28:58.085473   66178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:28:58.096167   66178 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:28:58.113767   66178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:28:58.128739   66178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:28:58.147185   66178 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:28:58.147246   66178 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:28:58.161774   66178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:28:58.174076   66178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:28:58.302565   66178 ssh_runner.go:195] Run: sudo systemctl restart crio
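The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup) and then restarts cri-o. A condensed Go sketch that replays the same kind of edits over SSH; the host, key path and exact command list are illustrative, not a copy of minikube's internals.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	host := "docker@192.168.72.48" // VM from the log
	key := "/path/to/id_rsa"       // placeholder

	// In-place edits like the ones in the log, followed by a restart.
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		out, err := exec.Command("ssh", "-i", key, host, c).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%q failed: %v\n%s", c, err, out))
		}
	}
}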
	I0717 01:28:58.449514   66178 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:28:58.449580   66178 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:28:58.454272   66178 start.go:563] Will wait 60s for crictl version
	I0717 01:28:58.454320   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:28:58.458412   66178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:28:58.508726   66178 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:28:58.508827   66178 ssh_runner.go:195] Run: crio --version
	I0717 01:28:58.542807   66178 ssh_runner.go:195] Run: crio --version
	I0717 01:28:58.575979   66178 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:28:55.774320   66659 pod_ready.go:102] pod "coredns-7db6d8ff4d-vwv7z" in "kube-system" namespace has status "Ready":"False"
	I0717 01:28:57.775243   66659 pod_ready.go:102] pod "coredns-7db6d8ff4d-vwv7z" in "kube-system" namespace has status "Ready":"False"
	I0717 01:28:57.424469   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:28:57.424335   67801 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa...
	I0717 01:28:57.495473   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:28:57.495340   67801 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/no-preload-818382.rawdisk...
	I0717 01:28:57.495512   67712 main.go:141] libmachine: (no-preload-818382) DBG | Writing magic tar header
	I0717 01:28:57.495530   67712 main.go:141] libmachine: (no-preload-818382) DBG | Writing SSH key tar header
	I0717 01:28:57.495540   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:28:57.495475   67801 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382 ...
	I0717 01:28:57.495626   67712 main.go:141] libmachine: (no-preload-818382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382
	I0717 01:28:57.495654   67712 main.go:141] libmachine: (no-preload-818382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 01:28:57.495669   67712 main.go:141] libmachine: (no-preload-818382) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382 (perms=drwx------)
	I0717 01:28:57.495681   67712 main.go:141] libmachine: (no-preload-818382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:28:57.495693   67712 main.go:141] libmachine: (no-preload-818382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 01:28:57.495702   67712 main.go:141] libmachine: (no-preload-818382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 01:28:57.495717   67712 main.go:141] libmachine: (no-preload-818382) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 01:28:57.495730   67712 main.go:141] libmachine: (no-preload-818382) DBG | Checking permissions on dir: /home/jenkins
	I0717 01:28:57.495740   67712 main.go:141] libmachine: (no-preload-818382) DBG | Checking permissions on dir: /home
	I0717 01:28:57.495754   67712 main.go:141] libmachine: (no-preload-818382) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 01:28:57.495766   67712 main.go:141] libmachine: (no-preload-818382) DBG | Skipping /home - not owner
	I0717 01:28:57.495785   67712 main.go:141] libmachine: (no-preload-818382) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 01:28:57.495795   67712 main.go:141] libmachine: (no-preload-818382) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 01:28:57.495805   67712 main.go:141] libmachine: (no-preload-818382) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 01:28:57.495815   67712 main.go:141] libmachine: (no-preload-818382) Creating domain...
	I0717 01:28:57.497103   67712 main.go:141] libmachine: (no-preload-818382) define libvirt domain using xml: 
	I0717 01:28:57.497129   67712 main.go:141] libmachine: (no-preload-818382) <domain type='kvm'>
	I0717 01:28:57.497174   67712 main.go:141] libmachine: (no-preload-818382)   <name>no-preload-818382</name>
	I0717 01:28:57.497199   67712 main.go:141] libmachine: (no-preload-818382)   <memory unit='MiB'>2200</memory>
	I0717 01:28:57.497212   67712 main.go:141] libmachine: (no-preload-818382)   <vcpu>2</vcpu>
	I0717 01:28:57.497223   67712 main.go:141] libmachine: (no-preload-818382)   <features>
	I0717 01:28:57.497234   67712 main.go:141] libmachine: (no-preload-818382)     <acpi/>
	I0717 01:28:57.497243   67712 main.go:141] libmachine: (no-preload-818382)     <apic/>
	I0717 01:28:57.497252   67712 main.go:141] libmachine: (no-preload-818382)     <pae/>
	I0717 01:28:57.497262   67712 main.go:141] libmachine: (no-preload-818382)     
	I0717 01:28:57.497289   67712 main.go:141] libmachine: (no-preload-818382)   </features>
	I0717 01:28:57.497314   67712 main.go:141] libmachine: (no-preload-818382)   <cpu mode='host-passthrough'>
	I0717 01:28:57.497333   67712 main.go:141] libmachine: (no-preload-818382)   
	I0717 01:28:57.497353   67712 main.go:141] libmachine: (no-preload-818382)   </cpu>
	I0717 01:28:57.497365   67712 main.go:141] libmachine: (no-preload-818382)   <os>
	I0717 01:28:57.497375   67712 main.go:141] libmachine: (no-preload-818382)     <type>hvm</type>
	I0717 01:28:57.497387   67712 main.go:141] libmachine: (no-preload-818382)     <boot dev='cdrom'/>
	I0717 01:28:57.497395   67712 main.go:141] libmachine: (no-preload-818382)     <boot dev='hd'/>
	I0717 01:28:57.497405   67712 main.go:141] libmachine: (no-preload-818382)     <bootmenu enable='no'/>
	I0717 01:28:57.497415   67712 main.go:141] libmachine: (no-preload-818382)   </os>
	I0717 01:28:57.497438   67712 main.go:141] libmachine: (no-preload-818382)   <devices>
	I0717 01:28:57.497465   67712 main.go:141] libmachine: (no-preload-818382)     <disk type='file' device='cdrom'>
	I0717 01:28:57.497482   67712 main.go:141] libmachine: (no-preload-818382)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/boot2docker.iso'/>
	I0717 01:28:57.497492   67712 main.go:141] libmachine: (no-preload-818382)       <target dev='hdc' bus='scsi'/>
	I0717 01:28:57.497503   67712 main.go:141] libmachine: (no-preload-818382)       <readonly/>
	I0717 01:28:57.497513   67712 main.go:141] libmachine: (no-preload-818382)     </disk>
	I0717 01:28:57.497523   67712 main.go:141] libmachine: (no-preload-818382)     <disk type='file' device='disk'>
	I0717 01:28:57.497533   67712 main.go:141] libmachine: (no-preload-818382)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 01:28:57.497554   67712 main.go:141] libmachine: (no-preload-818382)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/no-preload-818382.rawdisk'/>
	I0717 01:28:57.497568   67712 main.go:141] libmachine: (no-preload-818382)       <target dev='hda' bus='virtio'/>
	I0717 01:28:57.497582   67712 main.go:141] libmachine: (no-preload-818382)     </disk>
	I0717 01:28:57.497593   67712 main.go:141] libmachine: (no-preload-818382)     <interface type='network'>
	I0717 01:28:57.497604   67712 main.go:141] libmachine: (no-preload-818382)       <source network='mk-no-preload-818382'/>
	I0717 01:28:57.497615   67712 main.go:141] libmachine: (no-preload-818382)       <model type='virtio'/>
	I0717 01:28:57.497624   67712 main.go:141] libmachine: (no-preload-818382)     </interface>
	I0717 01:28:57.497634   67712 main.go:141] libmachine: (no-preload-818382)     <interface type='network'>
	I0717 01:28:57.497648   67712 main.go:141] libmachine: (no-preload-818382)       <source network='default'/>
	I0717 01:28:57.497659   67712 main.go:141] libmachine: (no-preload-818382)       <model type='virtio'/>
	I0717 01:28:57.497672   67712 main.go:141] libmachine: (no-preload-818382)     </interface>
	I0717 01:28:57.497682   67712 main.go:141] libmachine: (no-preload-818382)     <serial type='pty'>
	I0717 01:28:57.497692   67712 main.go:141] libmachine: (no-preload-818382)       <target port='0'/>
	I0717 01:28:57.497712   67712 main.go:141] libmachine: (no-preload-818382)     </serial>
	I0717 01:28:57.497720   67712 main.go:141] libmachine: (no-preload-818382)     <console type='pty'>
	I0717 01:28:57.497728   67712 main.go:141] libmachine: (no-preload-818382)       <target type='serial' port='0'/>
	I0717 01:28:57.497738   67712 main.go:141] libmachine: (no-preload-818382)     </console>
	I0717 01:28:57.497745   67712 main.go:141] libmachine: (no-preload-818382)     <rng model='virtio'>
	I0717 01:28:57.497759   67712 main.go:141] libmachine: (no-preload-818382)       <backend model='random'>/dev/random</backend>
	I0717 01:28:57.497769   67712 main.go:141] libmachine: (no-preload-818382)     </rng>
	I0717 01:28:57.497795   67712 main.go:141] libmachine: (no-preload-818382)     
	I0717 01:28:57.497805   67712 main.go:141] libmachine: (no-preload-818382)     
	I0717 01:28:57.497815   67712 main.go:141] libmachine: (no-preload-818382)   </devices>
	I0717 01:28:57.497824   67712 main.go:141] libmachine: (no-preload-818382) </domain>
	I0717 01:28:57.497837   67712 main.go:141] libmachine: (no-preload-818382) 
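The domain XML printed above is what gets defined for the new VM. A trimmed Go sketch that renders a comparable <domain> definition with text/template; values are filled from the log where available and placeholders elsewhere, and the RNG, serial and console devices are omitted for brevity.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down version of the <domain> XML the log prints: ISO as a
// read-only cdrom, the raw disk as a virtio disk, and two NICs (the private
// mk-* network plus the default network).
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.RawDisk}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.PrivateNet}}'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	err := t.Execute(os.Stdout, map[string]any{
		"Name":       "no-preload-818382",
		"MemoryMiB":  2200,
		"CPUs":       2,
		"ISO":        "/path/to/boot2docker.iso",
		"RawDisk":    "/path/to/no-preload-818382.rawdisk",
		"PrivateNet": "mk-no-preload-818382",
	})
	if err != nil {
		panic(err)
	}
}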
	I0717 01:28:57.502067   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:12:13:07 in network default
	I0717 01:28:57.502556   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:28:57.502593   67712 main.go:141] libmachine: (no-preload-818382) Ensuring networks are active...
	I0717 01:28:57.503316   67712 main.go:141] libmachine: (no-preload-818382) Ensuring network default is active
	I0717 01:28:57.503682   67712 main.go:141] libmachine: (no-preload-818382) Ensuring network mk-no-preload-818382 is active
	I0717 01:28:57.504202   67712 main.go:141] libmachine: (no-preload-818382) Getting domain xml...
	I0717 01:28:57.504953   67712 main.go:141] libmachine: (no-preload-818382) Creating domain...
	I0717 01:28:58.771552   67712 main.go:141] libmachine: (no-preload-818382) Waiting to get IP...
	I0717 01:28:58.773237   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:28:58.773792   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:28:58.773838   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:28:58.773773   67801 retry.go:31] will retry after 242.783451ms: waiting for machine to come up
	I0717 01:28:59.018398   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:28:59.018934   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:28:59.018961   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:28:59.018899   67801 retry.go:31] will retry after 360.29212ms: waiting for machine to come up
	I0717 01:28:59.380490   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:28:59.381057   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:28:59.381078   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:28:59.381024   67801 retry.go:31] will retry after 335.263024ms: waiting for machine to come up
	I0717 01:28:59.718227   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:28:59.718757   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:28:59.718785   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:28:59.718713   67801 retry.go:31] will retry after 445.383285ms: waiting for machine to come up
	I0717 01:29:00.165377   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:29:00.165851   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:29:00.165906   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:29:00.165832   67801 retry.go:31] will retry after 748.057068ms: waiting for machine to come up
	I0717 01:29:00.915724   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:29:00.916455   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:29:00.916487   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:29:00.916407   67801 retry.go:31] will retry after 891.609942ms: waiting for machine to come up
	I0717 01:29:01.809680   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:29:01.810243   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:29:01.810281   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:29:01.810166   67801 retry.go:31] will retry after 1.079567063s: waiting for machine to come up
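The retry lines above are the driver polling for the VM's DHCP lease with growing delays. A hypothetical Go sketch of that wait, reading leases via `virsh net-dhcp-leases`; the MAC and network name come from the log, while the parsing and backoff policy are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForIP polls the libvirt network's DHCP leases until the domain's MAC
// shows up, backing off between attempts like the retry lines in the log.
func waitForIP(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		out, err := exec.Command("virsh", "net-dhcp-leases", network).CombinedOutput()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if !strings.Contains(line, mac) {
					continue
				}
				// Lease lines look like: expiry-date expiry-time mac proto ip/prefix hostname client-id
				for _, f := range strings.Fields(line) {
					if strings.Contains(f, "/") && strings.Count(f, ".") == 3 {
						return strings.SplitN(f, "/", 2)[0], nil
					}
				}
			}
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s in %s after %s", mac, network, timeout)
}

func main() {
	ip, err := waitForIP("mk-no-preload-818382", "52:54:00:e4:de:04", 4*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("machine IP:", ip)
}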
	I0717 01:28:58.577193   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetIP
	I0717 01:28:58.580362   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:58.580840   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:28:58.580890   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:28:58.581058   66178 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 01:28:58.585374   66178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:28:58.598623   66178 kubeadm.go:883] updating cluster {Name:embed-certs-484167 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-484167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:28:58.598729   66178 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:28:58.598782   66178 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:28:58.647794   66178 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:28:58.647855   66178 ssh_runner.go:195] Run: which lz4
	I0717 01:28:58.652400   66178 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:28:58.657077   66178 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:28:58.657105   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:29:00.153765   66178 crio.go:462] duration metric: took 1.501384324s to copy over tarball
	I0717 01:29:00.153862   66178 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:29:02.456608   66178 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.302713113s)
	I0717 01:29:02.456639   66178 crio.go:469] duration metric: took 2.302836758s to extract the tarball
	I0717 01:29:02.456648   66178 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:29:02.496532   66178 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:29:02.541794   66178 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:29:02.541821   66178 cache_images.go:84] Images are preloaded, skipping loading
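The preload decision above boils down to parsing "crictl images --output json" and looking for the expected kube-apiserver tag. A rough stdlib sketch of that kind of check (an approximation, not crio.go's real implementation):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages mirrors just the fields of `crictl images --output json`
// that the check needs.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any local image tag contains the wanted name.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.2")
	fmt.Println("preloaded:", ok, err)
}
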
	I0717 01:29:02.541830   66178 kubeadm.go:934] updating node { 192.168.72.48 8443 v1.30.2 crio true true} ...
	I0717 01:29:02.541975   66178 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-484167 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-484167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:29:02.542063   66178 ssh_runner.go:195] Run: crio config
	I0717 01:29:02.614505   66178 cni.go:84] Creating CNI manager for ""
	I0717 01:29:02.614530   66178 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:29:02.614543   66178 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:29:02.614562   66178 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.48 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-484167 NodeName:embed-certs-484167 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:29:02.614730   66178 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-484167"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
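The kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small stdlib-only sketch (not part of minikube) that splits the generated /var/tmp/minikube/kubeadm.yaml on its document separators and prints each apiVersion/kind as a quick sanity check:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// kubeadm configs separate documents with a bare "---" line.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		var apiVersion, kind string
		for _, line := range strings.Split(doc, "\n") {
			if v, ok := strings.CutPrefix(line, "apiVersion: "); ok {
				apiVersion = v
			}
			if v, ok := strings.CutPrefix(line, "kind: "); ok {
				kind = v
			}
		}
		fmt.Printf("document %d: %s %s\n", i, apiVersion, kind)
	}
}
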
	
	I0717 01:29:02.614793   66178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:29:02.625705   66178 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:29:02.625777   66178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:28:59.775266   66659 pod_ready.go:92] pod "coredns-7db6d8ff4d-vwv7z" in "kube-system" namespace has status "Ready":"True"
	I0717 01:28:59.775294   66659 pod_ready.go:81] duration metric: took 8.507685346s for pod "coredns-7db6d8ff4d-vwv7z" in "kube-system" namespace to be "Ready" ...
	I0717 01:28:59.775309   66659 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:28:59.781619   66659 pod_ready.go:92] pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:28:59.781640   66659 pod_ready.go:81] duration metric: took 6.324537ms for pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:28:59.781649   66659 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:28:59.787262   66659 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:28:59.787287   66659 pod_ready.go:81] duration metric: took 5.630352ms for pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:28:59.787300   66659 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:01.795108   66659 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"False"
	I0717 01:29:02.794364   66659 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:29:02.794389   66659 pod_ready.go:81] duration metric: took 3.007080016s for pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:02.794402   66659 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7vv55" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:02.799837   66659 pod_ready.go:92] pod "kube-proxy-7vv55" in "kube-system" namespace has status "Ready":"True"
	I0717 01:29:02.799853   66659 pod_ready.go:81] duration metric: took 5.443485ms for pod "kube-proxy-7vv55" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:02.799861   66659 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:02.804225   66659 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:29:02.804246   66659 pod_ready.go:81] duration metric: took 4.378754ms for pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:02.804259   66659 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:02.891818   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:29:02.892335   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:29:02.892361   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:29:02.892295   67801 retry.go:31] will retry after 1.182444844s: waiting for machine to come up
	I0717 01:29:04.076953   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:29:04.077446   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:29:04.077479   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:29:04.077398   67801 retry.go:31] will retry after 1.206166912s: waiting for machine to come up
	I0717 01:29:05.285899   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:29:05.286482   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:29:05.286510   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:29:05.286434   67801 retry.go:31] will retry after 1.402365458s: waiting for machine to come up
	I0717 01:29:06.691088   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:29:06.691614   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:29:06.691638   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:29:06.691566   67801 retry.go:31] will retry after 2.770461053s: waiting for machine to come up
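The no-preload machine wait above retries with growing, slightly jittered delays until the domain reports an IP. A rough sketch of that pattern (hypothetical and simpler than retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil calls probe with a growing, jittered delay until it succeeds
// or the time budget is spent.
func retryUntil(probe func() error, budget time.Duration) error {
	delay := 500 * time.Millisecond
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return errors.New("machine did not come up in time")
}

func main() {
	attempts := 0
	err := retryUntil(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP address yet")
		}
		return nil
	}, 30*time.Second)
	fmt.Println(err)
}
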
	I0717 01:29:02.636195   66178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0717 01:29:02.655755   66178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:29:02.674042   66178 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0717 01:29:02.694803   66178 ssh_runner.go:195] Run: grep 192.168.72.48	control-plane.minikube.internal$ /etc/hosts
	I0717 01:29:02.700165   66178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:29:02.715840   66178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:29:02.854061   66178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:29:02.871512   66178 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/embed-certs-484167 for IP: 192.168.72.48
	I0717 01:29:02.871536   66178 certs.go:194] generating shared ca certs ...
	I0717 01:29:02.871557   66178 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:29:02.871734   66178 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:29:02.871786   66178 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:29:02.871798   66178 certs.go:256] generating profile certs ...
	I0717 01:29:02.871911   66178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/embed-certs-484167/client.key
	I0717 01:29:02.871996   66178 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/embed-certs-484167/apiserver.key.7f7147cc
	I0717 01:29:02.872073   66178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/embed-certs-484167/proxy-client.key
	I0717 01:29:02.872218   66178 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:29:02.872260   66178 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:29:02.872272   66178 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:29:02.872299   66178 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:29:02.872329   66178 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:29:02.872357   66178 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:29:02.872404   66178 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:29:02.873144   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:29:02.913775   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:29:02.946564   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:29:02.972974   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:29:02.997868   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/embed-certs-484167/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 01:29:03.023563   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/embed-certs-484167/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:29:03.048760   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/embed-certs-484167/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:29:03.074957   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/embed-certs-484167/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:29:03.100339   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:29:03.124094   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:29:03.147175   66178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:29:03.173776   66178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:29:03.191775   66178 ssh_runner.go:195] Run: openssl version
	I0717 01:29:03.197494   66178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:29:03.207782   66178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:29:03.212137   66178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:29:03.212199   66178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:29:03.218141   66178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:29:03.228795   66178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:29:03.239353   66178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:29:03.243763   66178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:29:03.243816   66178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:29:03.249599   66178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:29:03.260651   66178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:29:03.271262   66178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:29:03.276044   66178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:29:03.276118   66178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:29:03.281766   66178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:29:03.292254   66178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:29:03.296618   66178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:29:03.302388   66178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:29:03.308361   66178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:29:03.314410   66178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:29:03.320211   66178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:29:03.326115   66178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
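The string of openssl runs above uses "-checkend 86400" to flag certificates that expire within 24 hours. The same check expressed in Go, as a hedged sketch (a hypothetical helper, not minikube's certs.go):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, err)
}
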
	I0717 01:29:03.331927   66178 kubeadm.go:392] StartCluster: {Name:embed-certs-484167 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-484167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:29:03.332003   66178 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:29:03.332037   66178 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:29:03.369233   66178 cri.go:89] found id: ""
	I0717 01:29:03.369303   66178 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:29:03.384218   66178 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:29:03.384238   66178 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:29:03.384301   66178 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:29:03.396793   66178 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:29:03.397741   66178 kubeconfig.go:125] found "embed-certs-484167" server: "https://192.168.72.48:8443"
	I0717 01:29:03.399504   66178 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:29:03.409403   66178 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.48
	I0717 01:29:03.409440   66178 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:29:03.409453   66178 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:29:03.409530   66178 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:29:03.457014   66178 cri.go:89] found id: ""
	I0717 01:29:03.457081   66178 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:29:03.475138   66178 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:29:03.485330   66178 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:29:03.485351   66178 kubeadm.go:157] found existing configuration files:
	
	I0717 01:29:03.485406   66178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:29:03.494272   66178 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:29:03.494330   66178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:29:03.503439   66178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:29:03.512319   66178 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:29:03.512373   66178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:29:03.522379   66178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:29:03.533025   66178 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:29:03.533078   66178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:29:03.542360   66178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:29:03.551187   66178 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:29:03.551270   66178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:29:03.560527   66178 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:29:03.570119   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:29:03.689386   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:29:04.531369   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:29:04.748943   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:29:04.849096   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:29:04.992470   66178 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:29:04.992579   66178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:29:05.492771   66178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:29:05.993396   66178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:29:06.013399   66178 api_server.go:72] duration metric: took 1.020929291s to wait for apiserver process to appear ...
	I0717 01:29:06.013428   66178 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:29:06.013447   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:29:06.014089   66178 api_server.go:269] stopped: https://192.168.72.48:8443/healthz: Get "https://192.168.72.48:8443/healthz": dial tcp 192.168.72.48:8443: connect: connection refused
	I0717 01:29:06.513790   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:29:04.811472   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:29:06.812269   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:29:09.310528   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:29:09.465135   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:29:09.465644   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:29:09.465668   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:29:09.465595   67801 retry.go:31] will retry after 2.887318805s: waiting for machine to come up
	I0717 01:29:09.155422   66178 api_server.go:279] https://192.168.72.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:29:09.155473   66178 api_server.go:103] status: https://192.168.72.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:29:09.155497   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:29:09.166714   66178 api_server.go:279] https://192.168.72.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:29:09.166744   66178 api_server.go:103] status: https://192.168.72.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:29:09.514347   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:29:09.518688   66178 api_server.go:279] https://192.168.72.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:29:09.518717   66178 api_server.go:103] status: https://192.168.72.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:29:10.014332   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:29:10.022407   66178 api_server.go:279] https://192.168.72.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:29:10.022438   66178 api_server.go:103] status: https://192.168.72.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:29:10.513921   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:29:10.519656   66178 api_server.go:279] https://192.168.72.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:29:10.519693   66178 api_server.go:103] status: https://192.168.72.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:29:11.014234   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:29:11.018845   66178 api_server.go:279] https://192.168.72.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:29:11.018870   66178 api_server.go:103] status: https://192.168.72.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:29:11.514482   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:29:11.518725   66178 api_server.go:279] https://192.168.72.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:29:11.518766   66178 api_server.go:103] status: https://192.168.72.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:29:12.014400   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:29:12.019788   66178 api_server.go:279] https://192.168.72.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:29:12.019820   66178 api_server.go:103] status: https://192.168.72.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:29:12.514519   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:29:12.518777   66178 api_server.go:279] https://192.168.72.48:8443/healthz returned 200:
	ok
	I0717 01:29:12.524888   66178 api_server.go:141] control plane version: v1.30.2
	I0717 01:29:12.524915   66178 api_server.go:131] duration metric: took 6.511480045s to wait for apiserver health ...
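The healthz wait above polls https://192.168.72.48:8443/healthz, treating the 403 and 500 responses as "not ready yet" until a 200 "ok" comes back. A minimal polling sketch (simplified relative to api_server.go; the real code authenticates with the cluster CA instead of skipping TLS verification):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the endpoint until it returns 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.48:8443/healthz", 2*time.Minute))
}
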
	I0717 01:29:12.524924   66178 cni.go:84] Creating CNI manager for ""
	I0717 01:29:12.524930   66178 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:29:12.526571   66178 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:29:12.527660   66178 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:29:12.540716   66178 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
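For reference, a bridge conflist like the 1-k8s.conflist written above typically pairs the bridge plugin (host-local IPAM over the 10.244.0.0/16 pod CIDR) with a portmap chain; the exact contents minikube templates are an assumption in this sketch:

package main

import (
	"fmt"
	"os"
)

// conflist is an illustrative bridge CNI configuration, not necessarily
// the exact file minikube generates.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Println("write conflist:", err)
	}
}
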
	I0717 01:29:12.560284   66178 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:29:12.573472   66178 system_pods.go:59] 8 kube-system pods found
	I0717 01:29:12.573521   66178 system_pods.go:61] "coredns-7db6d8ff4d-z4qpz" [43aa103c-9e70-4fb1-8607-321b6904a218] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:29:12.573532   66178 system_pods.go:61] "etcd-embed-certs-484167" [55918032-05ab-4a5b-951c-c8d4a063751e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:29:12.573542   66178 system_pods.go:61] "kube-apiserver-embed-certs-484167" [39facb47-77a1-4eb7-9c7e-795b35adb238] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:29:12.573550   66178 system_pods.go:61] "kube-controller-manager-embed-certs-484167" [270c8cb6-2fdd-4cec-9692-ecc2950ce3b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:29:12.573560   66178 system_pods.go:61] "kube-proxy-gq7qg" [ac9a0ae4-28e0-4900-a39b-f7a0eba7cc06] Running
	I0717 01:29:12.573567   66178 system_pods.go:61] "kube-scheduler-embed-certs-484167" [e9ea6022-e399-42a3-b8c9-a09a57aa8126] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:29:12.573581   66178 system_pods.go:61] "metrics-server-569cc877fc-2qwf6" [caefc20d-d993-46cb-b815-e4ae30ce4e85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:29:12.573590   66178 system_pods.go:61] "storage-provisioner" [620df9ee-45a9-4b04-a21c-0ddc878375ca] Running
	I0717 01:29:12.573599   66178 system_pods.go:74] duration metric: took 13.297661ms to wait for pod list to return data ...
	I0717 01:29:12.573611   66178 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:29:12.579124   66178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:29:12.579149   66178 node_conditions.go:123] node cpu capacity is 2
	I0717 01:29:12.579165   66178 node_conditions.go:105] duration metric: took 5.547151ms to run NodePressure ...
	I0717 01:29:12.579179   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:29:12.848463   66178 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:29:12.852661   66178 kubeadm.go:739] kubelet initialised
	I0717 01:29:12.852681   66178 kubeadm.go:740] duration metric: took 4.193778ms waiting for restarted kubelet to initialise ...
	I0717 01:29:12.852688   66178 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:29:12.858817   66178 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z4qpz" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:12.864087   66178 pod_ready.go:97] node "embed-certs-484167" hosting pod "coredns-7db6d8ff4d-z4qpz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:12.864111   66178 pod_ready.go:81] duration metric: took 5.270716ms for pod "coredns-7db6d8ff4d-z4qpz" in "kube-system" namespace to be "Ready" ...
	E0717 01:29:12.864120   66178 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-484167" hosting pod "coredns-7db6d8ff4d-z4qpz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:12.864128   66178 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-484167" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:12.868018   66178 pod_ready.go:97] node "embed-certs-484167" hosting pod "etcd-embed-certs-484167" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:12.868043   66178 pod_ready.go:81] duration metric: took 3.907256ms for pod "etcd-embed-certs-484167" in "kube-system" namespace to be "Ready" ...
	E0717 01:29:12.868051   66178 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-484167" hosting pod "etcd-embed-certs-484167" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:12.868057   66178 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-484167" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:12.872012   66178 pod_ready.go:97] node "embed-certs-484167" hosting pod "kube-apiserver-embed-certs-484167" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:12.872044   66178 pod_ready.go:81] duration metric: took 3.972018ms for pod "kube-apiserver-embed-certs-484167" in "kube-system" namespace to be "Ready" ...
	E0717 01:29:12.872053   66178 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-484167" hosting pod "kube-apiserver-embed-certs-484167" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:12.872059   66178 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-484167" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:12.963994   66178 pod_ready.go:97] node "embed-certs-484167" hosting pod "kube-controller-manager-embed-certs-484167" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:12.964021   66178 pod_ready.go:81] duration metric: took 91.954615ms for pod "kube-controller-manager-embed-certs-484167" in "kube-system" namespace to be "Ready" ...
	E0717 01:29:12.964030   66178 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-484167" hosting pod "kube-controller-manager-embed-certs-484167" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:12.964039   66178 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gq7qg" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:13.365483   66178 pod_ready.go:97] node "embed-certs-484167" hosting pod "kube-proxy-gq7qg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:13.365510   66178 pod_ready.go:81] duration metric: took 401.461881ms for pod "kube-proxy-gq7qg" in "kube-system" namespace to be "Ready" ...
	E0717 01:29:13.365519   66178 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-484167" hosting pod "kube-proxy-gq7qg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:13.365526   66178 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-484167" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:13.763486   66178 pod_ready.go:97] node "embed-certs-484167" hosting pod "kube-scheduler-embed-certs-484167" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:13.763514   66178 pod_ready.go:81] duration metric: took 397.9814ms for pod "kube-scheduler-embed-certs-484167" in "kube-system" namespace to be "Ready" ...
	E0717 01:29:13.763525   66178 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-484167" hosting pod "kube-scheduler-embed-certs-484167" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:13.763534   66178 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace to be "Ready" ...
	I0717 01:29:14.164489   66178 pod_ready.go:97] node "embed-certs-484167" hosting pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:14.164514   66178 pod_ready.go:81] duration metric: took 400.970811ms for pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace to be "Ready" ...
	E0717 01:29:14.164522   66178 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-484167" hosting pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:14.164530   66178 pod_ready.go:38] duration metric: took 1.31183424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:29:14.164545   66178 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:29:14.177850   66178 ops.go:34] apiserver oom_adj: -16
	I0717 01:29:14.177875   66178 kubeadm.go:597] duration metric: took 10.793631106s to restartPrimaryControlPlane
	I0717 01:29:14.177884   66178 kubeadm.go:394] duration metric: took 10.845964719s to StartCluster
	I0717 01:29:14.177901   66178 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:29:14.177989   66178 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:29:14.179474   66178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:29:14.192796   66178 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:29:14.192778   66178 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:29:14.192896   66178 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-484167"
	I0717 01:29:14.192992   66178 config.go:182] Loaded profile config "embed-certs-484167": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:29:14.193010   66178 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-484167"
	W0717 01:29:14.193022   66178 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:29:14.193063   66178 host.go:66] Checking if "embed-certs-484167" exists ...
	I0717 01:29:14.192936   66178 addons.go:69] Setting default-storageclass=true in profile "embed-certs-484167"
	I0717 01:29:14.193160   66178 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-484167"
	I0717 01:29:14.192952   66178 addons.go:69] Setting metrics-server=true in profile "embed-certs-484167"
	I0717 01:29:14.193240   66178 addons.go:234] Setting addon metrics-server=true in "embed-certs-484167"
	W0717 01:29:14.193256   66178 addons.go:243] addon metrics-server should already be in state true
	I0717 01:29:14.193288   66178 host.go:66] Checking if "embed-certs-484167" exists ...
	I0717 01:29:14.193474   66178 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:29:14.193504   66178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:29:14.193521   66178 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:29:14.193551   66178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:29:14.193608   66178 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:29:14.193650   66178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:29:14.208836   66178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35711
	I0717 01:29:14.208879   66178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33273
	I0717 01:29:14.209022   66178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33153
	I0717 01:29:14.209298   66178 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:29:14.209391   66178 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:29:14.209439   66178 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:29:14.209836   66178 main.go:141] libmachine: Using API Version  1
	I0717 01:29:14.209849   66178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:29:14.209854   66178 main.go:141] libmachine: Using API Version  1
	I0717 01:29:14.209875   66178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:29:14.209946   66178 main.go:141] libmachine: Using API Version  1
	I0717 01:29:14.209956   66178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:29:14.210393   66178 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:29:14.210399   66178 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:29:14.210591   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetState
	I0717 01:29:14.210642   66178 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:29:14.210958   66178 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:29:14.210997   66178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:29:14.211418   66178 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:29:14.211435   66178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:29:14.226488   66178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36641
	I0717 01:29:14.226838   66178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44141
	I0717 01:29:14.226920   66178 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:29:14.227195   66178 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:29:14.227425   66178 main.go:141] libmachine: Using API Version  1
	I0717 01:29:14.227449   66178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:29:14.227702   66178 main.go:141] libmachine: Using API Version  1
	I0717 01:29:14.227717   66178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:29:14.227743   66178 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:29:14.227918   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetState
	I0717 01:29:14.228073   66178 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:29:14.228265   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetState
	I0717 01:29:14.229625   66178 main.go:141] libmachine: (embed-certs-484167) Calling .DriverName
	I0717 01:29:14.229889   66178 main.go:141] libmachine: (embed-certs-484167) Calling .DriverName
	I0717 01:29:14.231183   66178 out.go:177] * Verifying Kubernetes components...
	I0717 01:29:14.233066   66178 addons.go:234] Setting addon default-storageclass=true in "embed-certs-484167"
	W0717 01:29:14.240672   66178 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:29:14.240705   66178 host.go:66] Checking if "embed-certs-484167" exists ...
	I0717 01:29:14.241060   66178 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:29:14.241103   66178 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:29:14.241987   66178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:29:14.241062   66178 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:29:11.810261   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:29:13.810391   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:29:14.243951   66178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:29:14.254209   66178 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:29:14.254241   66178 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:29:14.254269   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:29:14.254487   66178 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:29:14.254504   66178 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:29:14.254521   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:29:14.258175   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:29:14.258727   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:29:14.258750   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:29:14.258917   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:29:14.258952   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHPort
	I0717 01:29:14.259197   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:29:14.259353   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHUsername
	I0717 01:29:14.259396   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:29:14.259407   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:29:14.259499   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHPort
	I0717 01:29:14.259542   66178 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/embed-certs-484167/id_rsa Username:docker}
	I0717 01:29:14.259787   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:29:14.259929   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHUsername
	I0717 01:29:14.260018   66178 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/embed-certs-484167/id_rsa Username:docker}
	I0717 01:29:14.263724   66178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0717 01:29:14.264124   66178 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:29:14.264622   66178 main.go:141] libmachine: Using API Version  1
	I0717 01:29:14.264647   66178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:29:14.264997   66178 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:29:14.265554   66178 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:29:14.265585   66178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:29:14.286084   66178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34483
	I0717 01:29:14.286518   66178 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:29:14.287089   66178 main.go:141] libmachine: Using API Version  1
	I0717 01:29:14.287108   66178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:29:14.287521   66178 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:29:14.287798   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetState
	I0717 01:29:14.289759   66178 main.go:141] libmachine: (embed-certs-484167) Calling .DriverName
	I0717 01:29:14.290077   66178 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:29:14.290094   66178 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:29:14.290111   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:29:14.293229   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:29:14.293679   66178 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:28:48 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:29:14.293700   66178 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:29:14.293917   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHPort
	I0717 01:29:14.294087   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:29:14.294240   66178 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHUsername
	I0717 01:29:14.294341   66178 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/embed-certs-484167/id_rsa Username:docker}
	I0717 01:29:14.444432   66178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:29:14.461279   66178 node_ready.go:35] waiting up to 6m0s for node "embed-certs-484167" to be "Ready" ...
	I0717 01:29:14.537235   66178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:29:14.560057   66178 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:29:14.560082   66178 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:29:14.582614   66178 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:29:14.582642   66178 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:29:14.600298   66178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:29:14.603017   66178 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:29:14.603042   66178 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:29:14.651983   66178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:29:15.539810   66178 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.002533434s)
	I0717 01:29:15.539842   66178 main.go:141] libmachine: Making call to close driver server
	I0717 01:29:15.539858   66178 main.go:141] libmachine: (embed-certs-484167) Calling .Close
	I0717 01:29:15.539860   66178 main.go:141] libmachine: Making call to close driver server
	I0717 01:29:15.539873   66178 main.go:141] libmachine: (embed-certs-484167) Calling .Close
	I0717 01:29:15.540162   66178 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:29:15.540214   66178 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:29:15.540237   66178 main.go:141] libmachine: Making call to close driver server
	I0717 01:29:15.540255   66178 main.go:141] libmachine: (embed-certs-484167) Calling .Close
	I0717 01:29:15.540301   66178 main.go:141] libmachine: (embed-certs-484167) DBG | Closing plugin on server side
	I0717 01:29:15.540315   66178 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:29:15.540338   66178 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:29:15.540357   66178 main.go:141] libmachine: Making call to close driver server
	I0717 01:29:15.540370   66178 main.go:141] libmachine: (embed-certs-484167) Calling .Close
	I0717 01:29:15.540483   66178 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:29:15.540498   66178 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:29:15.540498   66178 main.go:141] libmachine: (embed-certs-484167) DBG | Closing plugin on server side
	I0717 01:29:15.540610   66178 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:29:15.540621   66178 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:29:15.540645   66178 main.go:141] libmachine: (embed-certs-484167) DBG | Closing plugin on server side
	I0717 01:29:15.546802   66178 main.go:141] libmachine: Making call to close driver server
	I0717 01:29:15.546817   66178 main.go:141] libmachine: (embed-certs-484167) Calling .Close
	I0717 01:29:15.547041   66178 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:29:15.547056   66178 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:29:15.547055   66178 main.go:141] libmachine: (embed-certs-484167) DBG | Closing plugin on server side
	I0717 01:29:15.573270   66178 main.go:141] libmachine: Making call to close driver server
	I0717 01:29:15.573295   66178 main.go:141] libmachine: (embed-certs-484167) Calling .Close
	I0717 01:29:15.573546   66178 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:29:15.573565   66178 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:29:15.573573   66178 main.go:141] libmachine: Making call to close driver server
	I0717 01:29:15.573579   66178 main.go:141] libmachine: (embed-certs-484167) Calling .Close
	I0717 01:29:15.573581   66178 main.go:141] libmachine: (embed-certs-484167) DBG | Closing plugin on server side
	I0717 01:29:15.573804   66178 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:29:15.573818   66178 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:29:15.573827   66178 addons.go:475] Verifying addon metrics-server=true in "embed-certs-484167"
	I0717 01:29:15.573841   66178 main.go:141] libmachine: (embed-certs-484167) DBG | Closing plugin on server side
	I0717 01:29:15.575709   66178 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:29:17.120217   64655 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 01:29:17.120324   64655 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 01:29:17.122004   64655 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 01:29:17.122074   64655 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:29:17.122162   64655 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:29:17.122282   64655 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:29:17.122404   64655 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:29:17.122483   64655 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:29:17.124214   64655 out.go:204]   - Generating certificates and keys ...
	I0717 01:29:17.124279   64655 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:29:17.124338   64655 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:29:17.124407   64655 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 01:29:17.124491   64655 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 01:29:17.124610   64655 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 01:29:17.124677   64655 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 01:29:17.124743   64655 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 01:29:17.124791   64655 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 01:29:17.124858   64655 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 01:29:17.124945   64655 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 01:29:17.125015   64655 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 01:29:17.125090   64655 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:29:17.125161   64655 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:29:17.125207   64655 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:29:17.125260   64655 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:29:17.125328   64655 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:29:17.125487   64655 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:29:17.125610   64655 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:29:17.125674   64655 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:29:17.125800   64655 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:29:12.354861   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:29:12.355242   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:29:12.355279   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:29:12.355211   67801 retry.go:31] will retry after 2.873667991s: waiting for machine to come up
	I0717 01:29:15.232408   67712 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:29:15.232867   67712 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:29:15.232899   67712 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:29:15.232844   67801 retry.go:31] will retry after 4.719830735s: waiting for machine to come up
	I0717 01:29:15.576990   66178 addons.go:510] duration metric: took 1.384188689s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
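	Editor's note: the log above reports the storage-provisioner, default-storageclass and metrics-server addons as enabled for the embed-certs-484167 profile. A sketch of how the same state could be confirmed from outside the test harness; the deployment name metrics-server is assumed from the pod name shown earlier in this run:
	
	  minikube -p embed-certs-484167 addons list
	  kubectl --context embed-certs-484167 -n kube-system get deploy metrics-server
	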
	I0717 01:29:16.465834   66178 node_ready.go:53] node "embed-certs-484167" has status "Ready":"False"
	I0717 01:29:17.127076   64655 out.go:204]   - Booting up control plane ...
	I0717 01:29:17.127167   64655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:29:17.127254   64655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:29:17.127335   64655 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:29:17.127426   64655 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:29:17.127553   64655 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 01:29:17.127592   64655 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 01:29:17.127645   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:29:17.127791   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:29:17.127843   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:29:17.128024   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:29:17.128133   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:29:17.128292   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:29:17.128355   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:29:17.128498   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:29:17.128569   64655 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:29:17.128729   64655 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:29:17.128737   64655 kubeadm.go:310] 
	I0717 01:29:17.128767   64655 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 01:29:17.128800   64655 kubeadm.go:310] 		timed out waiting for the condition
	I0717 01:29:17.128805   64655 kubeadm.go:310] 
	I0717 01:29:17.128849   64655 kubeadm.go:310] 	This error is likely caused by:
	I0717 01:29:17.128908   64655 kubeadm.go:310] 		- The kubelet is not running
	I0717 01:29:17.129063   64655 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 01:29:17.129074   64655 kubeadm.go:310] 
	I0717 01:29:17.129204   64655 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 01:29:17.129235   64655 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 01:29:17.129274   64655 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 01:29:17.129288   64655 kubeadm.go:310] 
	I0717 01:29:17.129388   64655 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 01:29:17.129469   64655 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 01:29:17.129482   64655 kubeadm.go:310] 
	I0717 01:29:17.129625   64655 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 01:29:17.129754   64655 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 01:29:17.129861   64655 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 01:29:17.129946   64655 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 01:29:17.130043   64655 kubeadm.go:394] duration metric: took 7m56.056343168s to StartCluster
	I0717 01:29:17.130056   64655 kubeadm.go:310] 
	I0717 01:29:17.130084   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:29:17.130150   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:29:17.170463   64655 cri.go:89] found id: ""
	I0717 01:29:17.170486   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.170496   64655 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:29:17.170502   64655 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:29:17.170553   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:29:17.204996   64655 cri.go:89] found id: ""
	I0717 01:29:17.205021   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.205028   64655 logs.go:278] No container was found matching "etcd"
	I0717 01:29:17.205034   64655 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:29:17.205087   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:29:17.239200   64655 cri.go:89] found id: ""
	I0717 01:29:17.239232   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.239241   64655 logs.go:278] No container was found matching "coredns"
	I0717 01:29:17.239248   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:29:17.239298   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:29:17.274065   64655 cri.go:89] found id: ""
	I0717 01:29:17.274096   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.274104   64655 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:29:17.274112   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:29:17.274170   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:29:17.312132   64655 cri.go:89] found id: ""
	I0717 01:29:17.312161   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.312172   64655 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:29:17.312181   64655 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:29:17.312254   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:29:17.347520   64655 cri.go:89] found id: ""
	I0717 01:29:17.347559   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.347569   64655 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:29:17.347580   64655 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:29:17.347638   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:29:17.386989   64655 cri.go:89] found id: ""
	I0717 01:29:17.387021   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.387032   64655 logs.go:278] No container was found matching "kindnet"
	I0717 01:29:17.387040   64655 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:29:17.387103   64655 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:29:17.421790   64655 cri.go:89] found id: ""
	I0717 01:29:17.421815   64655 logs.go:276] 0 containers: []
	W0717 01:29:17.421822   64655 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:29:17.421831   64655 logs.go:123] Gathering logs for kubelet ...
	I0717 01:29:17.421843   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:29:17.473599   64655 logs.go:123] Gathering logs for dmesg ...
	I0717 01:29:17.473628   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:29:17.488496   64655 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:29:17.488530   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:29:17.566512   64655 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:29:17.566541   64655 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:29:17.566559   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:29:17.677372   64655 logs.go:123] Gathering logs for container status ...
	I0717 01:29:17.677409   64655 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 01:29:17.725383   64655 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 01:29:17.725434   64655 out.go:239] * 
	W0717 01:29:17.725496   64655 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 01:29:17.725529   64655 out.go:239] * 
	W0717 01:29:17.726376   64655 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 01:29:17.729540   64655 out.go:177] 
	W0717 01:29:17.730940   64655 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 01:29:17.730995   64655 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 01:29:17.731022   64655 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 01:29:17.732408   64655 out.go:177] 
	
	
	==> CRI-O <==
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.784576023Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179758784554678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7fe21a7-06c7-4e9e-aa81-a172fec252e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.785092187Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1efa6cf9-b36b-4e86-b7be-4173921908e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.785141579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1efa6cf9-b36b-4e86-b7be-4173921908e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.785174519Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1efa6cf9-b36b-4e86-b7be-4173921908e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.825175925Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06cc3e33-3a0c-418a-bac4-a39dd60fe0f9 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.825309898Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06cc3e33-3a0c-418a-bac4-a39dd60fe0f9 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.826501415Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48d96fdc-a86d-4fdb-b4ae-6c564f027b54 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.827012870Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179758826981271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48d96fdc-a86d-4fdb-b4ae-6c564f027b54 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.827784673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6a61dd5-311f-4d19-ae42-ca3c97a1919b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.827876597Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6a61dd5-311f-4d19-ae42-ca3c97a1919b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.827926467Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d6a61dd5-311f-4d19-ae42-ca3c97a1919b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.861080515Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b172b8f7-cad3-4270-af28-629eb7d74978 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.861148274Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b172b8f7-cad3-4270-af28-629eb7d74978 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.862733981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5973c432-2132-4687-ad3a-00bea766744e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.863108643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179758863089541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5973c432-2132-4687-ad3a-00bea766744e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.863954272Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8e0ef64-c013-44cd-b971-1c7b0f3aae2b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.864001865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8e0ef64-c013-44cd-b971-1c7b0f3aae2b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.864041010Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c8e0ef64-c013-44cd-b971-1c7b0f3aae2b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.898087760Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc2d7494-c477-4256-95c8-71c2fb973b6a name=/runtime.v1.RuntimeService/Version
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.898168709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc2d7494-c477-4256-95c8-71c2fb973b6a name=/runtime.v1.RuntimeService/Version
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.908591729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db74ceb8-1389-4e0c-8632-3e8e4de37be1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.908979960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179758908954853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db74ceb8-1389-4e0c-8632-3e8e4de37be1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.909939089Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c97889d-fcb1-4a76-b96f-85d15f356021 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.909987412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c97889d-fcb1-4a76-b96f-85d15f356021 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:29:18 old-k8s-version-249342 crio[653]: time="2024-07-17 01:29:18.910026505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7c97889d-fcb1-4a76-b96f-85d15f356021 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul17 01:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053856] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042451] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.738175] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul17 01:21] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586475] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.258109] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.060071] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055484] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.214160] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.115956] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.256032] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +6.048119] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.063005] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.184126] systemd-fstab-generator[967]: Ignoring "noauto" option for root device
	[ +10.091636] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 01:25] systemd-fstab-generator[5033]: Ignoring "noauto" option for root device
	[Jul17 01:27] systemd-fstab-generator[5317]: Ignoring "noauto" option for root device
	[  +0.061098] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:29:19 up 8 min,  0 users,  load average: 0.00, 0.06, 0.04
	Linux old-k8s-version-249342 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:108 +0x66
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.DefaultWatchErrorHandler(0xc0000e0700, 0x4f04d00, 0xc000b9fa90)
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0009146f0)
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c87ef0, 0x4f0ac20, 0xc000993180, 0x1, 0xc0001020c0)
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000e0700, 0xc0001020c0)
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a26ba0, 0xc000bd89c0)
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 17 01:29:18 old-k8s-version-249342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 17 01:29:18 old-k8s-version-249342 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 01:29:18 old-k8s-version-249342 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5586]: I0717 01:29:18.753843    5586 server.go:416] Version: v1.20.0
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5586]: I0717 01:29:18.754394    5586 server.go:837] Client rotation is on, will bootstrap in background
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5586]: I0717 01:29:18.759313    5586 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5586]: W0717 01:29:18.760504    5586 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 17 01:29:18 old-k8s-version-249342 kubelet[5586]: I0717 01:29:18.760735    5586 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249342 -n old-k8s-version-249342
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249342 -n old-k8s-version-249342: exit status 2 (237.702634ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-249342" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (522.76s)
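Note on the failure above (editorial, not part of the captured output): the reported root cause is K8S_KUBELET_NOT_RUNNING; kubeadm v1.20.0 waited for the kubelet while systemd restarted it repeatedly ("restart counter is at 20") without it ever answering /healthz on port 10248. A minimal manual-triage sketch, reusing only commands and flags that already appear in the log; the exact invocations are illustrative, not what the test harness runs:

    # Inspect the kubelet and the CRI-O runtime on the affected node.
    out/minikube-linux-amd64 ssh -p old-k8s-version-249342 "sudo systemctl status kubelet"
    out/minikube-linux-amd64 ssh -p old-k8s-version-249342 "sudo journalctl -xeu kubelet | tail -n 100"
    out/minikube-linux-amd64 ssh -p old-k8s-version-249342 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
    # The suggestion printed by minikube itself: retry with the systemd cgroup driver.
    out/minikube-linux-amd64 start -p old-k8s-version-249342 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd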

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-484167 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-484167 --alsologtostderr -v=3: exit status 82 (2m0.54925255s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-484167"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:21:01.266523   65068 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:21:01.266663   65068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:21:01.266675   65068 out.go:304] Setting ErrFile to fd 2...
	I0717 01:21:01.266681   65068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:21:01.266874   65068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:21:01.267108   65068 out.go:298] Setting JSON to false
	I0717 01:21:01.267192   65068 mustload.go:65] Loading cluster: embed-certs-484167
	I0717 01:21:01.267564   65068 config.go:182] Loaded profile config "embed-certs-484167": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:21:01.267642   65068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/embed-certs-484167/config.json ...
	I0717 01:21:01.267789   65068 mustload.go:65] Loading cluster: embed-certs-484167
	I0717 01:21:01.267888   65068 config.go:182] Loaded profile config "embed-certs-484167": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:21:01.267911   65068 stop.go:39] StopHost: embed-certs-484167
	I0717 01:21:01.268333   65068 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:21:01.268374   65068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:21:01.283684   65068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34685
	I0717 01:21:01.284152   65068 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:21:01.284812   65068 main.go:141] libmachine: Using API Version  1
	I0717 01:21:01.284844   65068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:21:01.285232   65068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:21:01.287598   65068 out.go:177] * Stopping node "embed-certs-484167"  ...
	I0717 01:21:01.288901   65068 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 01:21:01.288937   65068 main.go:141] libmachine: (embed-certs-484167) Calling .DriverName
	I0717 01:21:01.289155   65068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 01:21:01.289179   65068 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHHostname
	I0717 01:21:01.292170   65068 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:21:01.292621   65068 main.go:141] libmachine: (embed-certs-484167) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:68:c9", ip: ""} in network mk-embed-certs-484167: {Iface:virbr4 ExpiryTime:2024-07-17 02:20:07 +0000 UTC Type:0 Mac:52:54:00:cf:68:c9 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:embed-certs-484167 Clientid:01:52:54:00:cf:68:c9}
	I0717 01:21:01.292660   65068 main.go:141] libmachine: (embed-certs-484167) DBG | domain embed-certs-484167 has defined IP address 192.168.72.48 and MAC address 52:54:00:cf:68:c9 in network mk-embed-certs-484167
	I0717 01:21:01.292901   65068 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHPort
	I0717 01:21:01.293047   65068 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHKeyPath
	I0717 01:21:01.293161   65068 main.go:141] libmachine: (embed-certs-484167) Calling .GetSSHUsername
	I0717 01:21:01.293334   65068 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/embed-certs-484167/id_rsa Username:docker}
	I0717 01:21:01.424950   65068 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 01:21:01.492748   65068 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 01:21:01.550331   65068 main.go:141] libmachine: Stopping "embed-certs-484167"...
	I0717 01:21:01.550362   65068 main.go:141] libmachine: (embed-certs-484167) Calling .GetState
	I0717 01:21:01.552184   65068 main.go:141] libmachine: (embed-certs-484167) Calling .Stop
	I0717 01:21:01.556065   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 0/120
	I0717 01:21:02.558440   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 1/120
	I0717 01:21:03.559714   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 2/120
	I0717 01:21:04.561367   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 3/120
	I0717 01:21:05.563000   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 4/120
	I0717 01:21:06.565007   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 5/120
	I0717 01:21:07.567144   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 6/120
	I0717 01:21:08.568515   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 7/120
	I0717 01:21:09.569957   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 8/120
	I0717 01:21:10.571129   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 9/120
	I0717 01:21:11.573494   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 10/120
	I0717 01:21:12.574697   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 11/120
	I0717 01:21:13.576302   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 12/120
	I0717 01:21:14.577913   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 13/120
	I0717 01:21:15.580093   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 14/120
	I0717 01:21:16.581348   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 15/120
	I0717 01:21:17.583431   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 16/120
	I0717 01:21:18.585281   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 17/120
	I0717 01:21:19.587555   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 18/120
	I0717 01:21:20.589362   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 19/120
	I0717 01:21:21.591633   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 20/120
	I0717 01:21:22.593302   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 21/120
	I0717 01:21:23.594967   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 22/120
	I0717 01:21:24.596656   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 23/120
	I0717 01:21:25.598478   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 24/120
	I0717 01:21:26.600230   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 25/120
	I0717 01:21:27.601626   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 26/120
	I0717 01:21:28.603002   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 27/120
	I0717 01:21:29.605527   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 28/120
	I0717 01:21:30.607636   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 29/120
	I0717 01:21:31.609543   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 30/120
	I0717 01:21:32.610901   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 31/120
	I0717 01:21:33.613182   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 32/120
	I0717 01:21:34.614998   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 33/120
	I0717 01:21:35.616541   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 34/120
	I0717 01:21:36.618528   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 35/120
	I0717 01:21:37.620724   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 36/120
	I0717 01:21:38.622989   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 37/120
	I0717 01:21:39.624445   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 38/120
	I0717 01:21:40.625937   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 39/120
	I0717 01:21:41.628072   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 40/120
	I0717 01:21:42.630189   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 41/120
	I0717 01:21:43.631955   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 42/120
	I0717 01:21:44.633193   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 43/120
	I0717 01:21:45.635071   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 44/120
	I0717 01:21:46.637200   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 45/120
	I0717 01:21:47.638914   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 46/120
	I0717 01:21:48.640463   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 47/120
	I0717 01:21:49.641793   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 48/120
	I0717 01:21:50.643841   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 49/120
	I0717 01:21:51.645445   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 50/120
	I0717 01:21:52.646700   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 51/120
	I0717 01:21:53.648071   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 52/120
	I0717 01:21:54.649288   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 53/120
	I0717 01:21:55.651276   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 54/120
	I0717 01:21:56.652827   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 55/120
	I0717 01:21:57.655115   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 56/120
	I0717 01:21:58.656445   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 57/120
	I0717 01:21:59.657752   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 58/120
	I0717 01:22:00.659190   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 59/120
	I0717 01:22:01.661154   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 60/120
	I0717 01:22:02.662545   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 61/120
	I0717 01:22:03.663779   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 62/120
	I0717 01:22:04.665072   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 63/120
	I0717 01:22:05.666328   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 64/120
	I0717 01:22:06.668052   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 65/120
	I0717 01:22:07.669392   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 66/120
	I0717 01:22:08.670974   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 67/120
	I0717 01:22:09.672350   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 68/120
	I0717 01:22:10.674005   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 69/120
	I0717 01:22:11.676415   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 70/120
	I0717 01:22:12.678596   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 71/120
	I0717 01:22:13.680215   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 72/120
	I0717 01:22:14.681663   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 73/120
	I0717 01:22:15.682968   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 74/120
	I0717 01:22:16.684956   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 75/120
	I0717 01:22:17.686451   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 76/120
	I0717 01:22:18.687595   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 77/120
	I0717 01:22:19.689178   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 78/120
	I0717 01:22:20.691057   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 79/120
	I0717 01:22:21.693438   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 80/120
	I0717 01:22:22.695136   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 81/120
	I0717 01:22:23.696674   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 82/120
	I0717 01:22:24.697801   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 83/120
	I0717 01:22:25.699070   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 84/120
	I0717 01:22:26.700908   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 85/120
	I0717 01:22:27.702447   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 86/120
	I0717 01:22:28.703955   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 87/120
	I0717 01:22:29.705384   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 88/120
	I0717 01:22:30.707032   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 89/120
	I0717 01:22:31.709365   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 90/120
	I0717 01:22:32.711788   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 91/120
	I0717 01:22:33.713272   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 92/120
	I0717 01:22:34.715119   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 93/120
	I0717 01:22:35.716495   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 94/120
	I0717 01:22:36.718371   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 95/120
	I0717 01:22:37.720822   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 96/120
	I0717 01:22:38.723251   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 97/120
	I0717 01:22:39.725045   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 98/120
	I0717 01:22:40.727359   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 99/120
	I0717 01:22:41.729847   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 100/120
	I0717 01:22:42.731221   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 101/120
	I0717 01:22:43.732806   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 102/120
	I0717 01:22:44.734977   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 103/120
	I0717 01:22:45.736344   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 104/120
	I0717 01:22:46.738132   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 105/120
	I0717 01:22:47.739602   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 106/120
	I0717 01:22:48.741048   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 107/120
	I0717 01:22:49.742993   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 108/120
	I0717 01:22:50.744219   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 109/120
	I0717 01:22:51.746245   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 110/120
	I0717 01:22:52.747633   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 111/120
	I0717 01:22:53.748968   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 112/120
	I0717 01:22:54.751169   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 113/120
	I0717 01:22:55.752380   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 114/120
	I0717 01:22:56.754261   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 115/120
	I0717 01:22:57.755985   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 116/120
	I0717 01:22:58.757374   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 117/120
	I0717 01:22:59.759236   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 118/120
	I0717 01:23:00.760662   65068 main.go:141] libmachine: (embed-certs-484167) Waiting for machine to stop 119/120
	I0717 01:23:01.762022   65068 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 01:23:01.762093   65068 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 01:23:01.763533   65068 out.go:177] 
	W0717 01:23:01.764839   65068 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 01:23:01.764852   65068 out.go:239] * 
	* 
	W0717 01:23:01.767859   65068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 01:23:01.769078   65068 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-484167 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-484167 -n embed-certs-484167
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-484167 -n embed-certs-484167: exit status 3 (18.438444492s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:23:20.208870   65968 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host
	E0717 01:23:20.208888   65968 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-484167" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.99s)
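Editorial note, not part of the captured output: the stderr above shows the kvm2 driver asking libvirt to stop the VM and then polling "Waiting for machine to stop n/120" for the full two minutes while the machine stays "Running", which is what surfaces here as exit status 82 / GUEST_STOP_TIMEOUT. A hedged manual follow-up sketch, not something the test harness performs; the domain name embed-certs-484167 is taken from the DBG lines above, and the virsh commands are standard libvirt tooling:

    virsh list --all                   # confirm the state libvirt reports for the domain
    virsh shutdown embed-certs-484167  # request an ACPI power-off from the guest
    virsh destroy embed-certs-484167   # hard power-off if the guest ignores the ACPI request
    out/minikube-linux-amd64 stop -p embed-certs-484167 --alsologtostderr -v=3   # then retry the stop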

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-945694 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-945694 --alsologtostderr -v=3: exit status 82 (2m0.489853405s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-945694"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:22:23.160452   65801 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:22:23.160698   65801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:22:23.160707   65801 out.go:304] Setting ErrFile to fd 2...
	I0717 01:22:23.160712   65801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:22:23.160923   65801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:22:23.161148   65801 out.go:298] Setting JSON to false
	I0717 01:22:23.161218   65801 mustload.go:65] Loading cluster: default-k8s-diff-port-945694
	I0717 01:22:23.161511   65801 config.go:182] Loaded profile config "default-k8s-diff-port-945694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:22:23.161578   65801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/config.json ...
	I0717 01:22:23.161766   65801 mustload.go:65] Loading cluster: default-k8s-diff-port-945694
	I0717 01:22:23.161861   65801 config.go:182] Loaded profile config "default-k8s-diff-port-945694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:22:23.161885   65801 stop.go:39] StopHost: default-k8s-diff-port-945694
	I0717 01:22:23.162240   65801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:22:23.162279   65801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:22:23.177213   65801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34165
	I0717 01:22:23.177756   65801 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:22:23.178384   65801 main.go:141] libmachine: Using API Version  1
	I0717 01:22:23.178410   65801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:22:23.178796   65801 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:22:23.181219   65801 out.go:177] * Stopping node "default-k8s-diff-port-945694"  ...
	I0717 01:22:23.182903   65801 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 01:22:23.182949   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .DriverName
	I0717 01:22:23.183226   65801 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 01:22:23.183255   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHHostname
	I0717 01:22:23.186333   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:22:23.186789   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:3e:63", ip: ""} in network mk-default-k8s-diff-port-945694: {Iface:virbr2 ExpiryTime:2024-07-17 02:21:28 +0000 UTC Type:0 Mac:52:54:00:c9:3e:63 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-945694 Clientid:01:52:54:00:c9:3e:63}
	I0717 01:22:23.186824   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined IP address 192.168.50.30 and MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:22:23.187011   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHPort
	I0717 01:22:23.187184   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHKeyPath
	I0717 01:22:23.187324   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHUsername
	I0717 01:22:23.187450   65801 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/default-k8s-diff-port-945694/id_rsa Username:docker}
	I0717 01:22:23.286730   65801 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 01:22:23.344490   65801 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 01:22:23.403372   65801 main.go:141] libmachine: Stopping "default-k8s-diff-port-945694"...
	I0717 01:22:23.403406   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:22:23.405131   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Stop
	I0717 01:22:23.408676   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 0/120
	I0717 01:22:24.410063   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 1/120
	I0717 01:22:25.411268   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 2/120
	I0717 01:22:26.412784   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 3/120
	I0717 01:22:27.414106   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 4/120
	I0717 01:22:28.416319   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 5/120
	I0717 01:22:29.417992   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 6/120
	I0717 01:22:30.419319   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 7/120
	I0717 01:22:31.420715   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 8/120
	I0717 01:22:32.423181   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 9/120
	I0717 01:22:33.425219   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 10/120
	I0717 01:22:34.426680   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 11/120
	I0717 01:22:35.428197   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 12/120
	I0717 01:22:36.429512   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 13/120
	I0717 01:22:37.431196   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 14/120
	I0717 01:22:38.432682   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 15/120
	I0717 01:22:39.435146   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 16/120
	I0717 01:22:40.436378   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 17/120
	I0717 01:22:41.437651   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 18/120
	I0717 01:22:42.438982   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 19/120
	I0717 01:22:43.441538   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 20/120
	I0717 01:22:44.442794   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 21/120
	I0717 01:22:45.444221   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 22/120
	I0717 01:22:46.445380   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 23/120
	I0717 01:22:47.446933   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 24/120
	I0717 01:22:48.448928   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 25/120
	I0717 01:22:49.450230   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 26/120
	I0717 01:22:50.451761   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 27/120
	I0717 01:22:51.453077   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 28/120
	I0717 01:22:52.454885   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 29/120
	I0717 01:22:53.456728   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 30/120
	I0717 01:22:54.458390   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 31/120
	I0717 01:22:55.459577   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 32/120
	I0717 01:22:56.460925   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 33/120
	I0717 01:22:57.462298   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 34/120
	I0717 01:22:58.464316   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 35/120
	I0717 01:22:59.465608   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 36/120
	I0717 01:23:00.466863   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 37/120
	I0717 01:23:01.468359   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 38/120
	I0717 01:23:02.469740   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 39/120
	I0717 01:23:03.471635   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 40/120
	I0717 01:23:04.473167   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 41/120
	I0717 01:23:05.474815   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 42/120
	I0717 01:23:06.476023   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 43/120
	I0717 01:23:07.477442   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 44/120
	I0717 01:23:08.479464   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 45/120
	I0717 01:23:09.481007   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 46/120
	I0717 01:23:10.482485   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 47/120
	I0717 01:23:11.483789   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 48/120
	I0717 01:23:12.486301   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 49/120
	I0717 01:23:13.488075   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 50/120
	I0717 01:23:14.489708   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 51/120
	I0717 01:23:15.491147   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 52/120
	I0717 01:23:16.492564   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 53/120
	I0717 01:23:17.494016   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 54/120
	I0717 01:23:18.495822   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 55/120
	I0717 01:23:19.497259   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 56/120
	I0717 01:23:20.499030   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 57/120
	I0717 01:23:21.500731   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 58/120
	I0717 01:23:22.502137   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 59/120
	I0717 01:23:23.504173   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 60/120
	I0717 01:23:24.505893   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 61/120
	I0717 01:23:25.507249   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 62/120
	I0717 01:23:26.508383   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 63/120
	I0717 01:23:27.509675   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 64/120
	I0717 01:23:28.511586   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 65/120
	I0717 01:23:29.513006   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 66/120
	I0717 01:23:30.514380   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 67/120
	I0717 01:23:31.515786   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 68/120
	I0717 01:23:32.517225   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 69/120
	I0717 01:23:33.519395   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 70/120
	I0717 01:23:34.520750   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 71/120
	I0717 01:23:35.523045   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 72/120
	I0717 01:23:36.524340   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 73/120
	I0717 01:23:37.525758   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 74/120
	I0717 01:23:38.527623   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 75/120
	I0717 01:23:39.529609   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 76/120
	I0717 01:23:40.531182   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 77/120
	I0717 01:23:41.532534   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 78/120
	I0717 01:23:42.533839   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 79/120
	I0717 01:23:43.535886   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 80/120
	I0717 01:23:44.537292   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 81/120
	I0717 01:23:45.538791   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 82/120
	I0717 01:23:46.540301   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 83/120
	I0717 01:23:47.541629   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 84/120
	I0717 01:23:48.543745   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 85/120
	I0717 01:23:49.545320   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 86/120
	I0717 01:23:50.546727   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 87/120
	I0717 01:23:51.548075   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 88/120
	I0717 01:23:52.549704   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 89/120
	I0717 01:23:53.552245   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 90/120
	I0717 01:23:54.553821   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 91/120
	I0717 01:23:55.555167   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 92/120
	I0717 01:23:56.556602   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 93/120
	I0717 01:23:57.557893   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 94/120
	I0717 01:23:58.559600   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 95/120
	I0717 01:23:59.560974   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 96/120
	I0717 01:24:00.562353   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 97/120
	I0717 01:24:01.563771   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 98/120
	I0717 01:24:02.565209   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 99/120
	I0717 01:24:03.567270   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 100/120
	I0717 01:24:04.568980   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 101/120
	I0717 01:24:05.570552   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 102/120
	I0717 01:24:06.572258   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 103/120
	I0717 01:24:07.574881   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 104/120
	I0717 01:24:08.576729   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 105/120
	I0717 01:24:09.579194   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 106/120
	I0717 01:24:10.580623   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 107/120
	I0717 01:24:11.582063   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 108/120
	I0717 01:24:12.583436   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 109/120
	I0717 01:24:13.585549   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 110/120
	I0717 01:24:14.586926   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 111/120
	I0717 01:24:15.588198   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 112/120
	I0717 01:24:16.589627   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 113/120
	I0717 01:24:17.591124   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 114/120
	I0717 01:24:18.593177   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 115/120
	I0717 01:24:19.595050   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 116/120
	I0717 01:24:20.596372   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 117/120
	I0717 01:24:21.597780   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 118/120
	I0717 01:24:22.599411   65801 main.go:141] libmachine: (default-k8s-diff-port-945694) Waiting for machine to stop 119/120
	I0717 01:24:23.600599   65801 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 01:24:23.600661   65801 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 01:24:23.602514   65801 out.go:177] 
	W0717 01:24:23.603980   65801 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 01:24:23.604011   65801 out.go:239] * 
	* 
	W0717 01:24:23.607345   65801 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 01:24:23.608714   65801 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-945694 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-945694 -n default-k8s-diff-port-945694
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-945694 -n default-k8s-diff-port-945694: exit status 3 (18.518461417s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:24:42.128910   66439 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host
	E0717 01:24:42.128929   66439 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-945694" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.01s)
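
The 120 "Waiting for machine to stop n/120" lines above are a bounded poll: the stop path checks the driver state roughly once per second and, after the final attempt, surfaces GUEST_STOP_TIMEOUT with the VM still reporting "Running". A minimal Go sketch of that pattern, assuming a simplified driver interface (the machine type and stopWithTimeout helper below are hypothetical, not minikube's actual libmachine code):

package vmstop

import (
	"errors"
	"fmt"
	"time"
)

// machine is a stand-in for a libmachine driver handle (hypothetical interface).
type machine interface {
	State() (string, error) // e.g. "Running", "Stopped"
	Stop() error
}

// stopWithTimeout issues Stop and then polls the driver state once per second,
// mirroring the "Waiting for machine to stop n/120" lines in the log above.
func stopWithTimeout(m machine, attempts int) error {
	if err := m.Stop(); err != nil {
		return fmt.Errorf("stop: %w", err)
	}
	for i := 0; i < attempts; i++ {
		st, err := m.State()
		if err != nil {
			return fmt.Errorf("state: %w", err)
		}
		if st == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	// After the last attempt the caller reports GUEST_STOP_TIMEOUT.
	return errors.New(`unable to stop vm, current state "Running"`)
}
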

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-484167 -n embed-certs-484167
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-484167 -n embed-certs-484167: exit status 3 (3.1680373s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:23:23.376930   66064 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host
	E0717 01:23:23.376951   66064 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-484167 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-484167 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153722589s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-484167 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-484167 -n embed-certs-484167
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-484167 -n embed-certs-484167: exit status 3 (3.061871818s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:23:32.592929   66145 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host
	E0717 01:23:32.592951   66145 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-484167" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
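
Both assertions in this block turn on process exit codes: minikube status --format={{.Host}} exits 3 when it cannot SSH into the host (so the test sees "Error" instead of the expected "Stopped"), and the follow-up addons enable dashboard exits 11 with MK_ADDON_ENABLE_PAUSED because its paused-container check also needs SSH. A minimal sketch of driving those two commands from Go and branching on the exit code; the hostStatus and enableDashboard helpers are illustrative only, not the test's actual helpers:

package statuscheck

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus runs `minikube status --format={{.Host}}` for a profile and
// returns the printed host state plus the process exit code.
func hostStatus(minikube, profile string) (string, int, error) {
	out, err := exec.Command(minikube, "status", "--format={{.Host}}", "-p", profile).Output()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode() // 3 in the log above: host unreachable, prints "Error"
		err = nil
	}
	return string(out), code, err
}

// enableDashboard mirrors the failing `addons enable dashboard` step, but only
// attempts it when the host actually reports a clean status.
func enableDashboard(minikube, profile string) error {
	st, code, err := hostStatus(minikube, profile)
	if err != nil {
		return err
	}
	if code != 0 {
		// The test expects "Stopped" after a clean stop; "Error" plus a
		// non-zero exit code means the VM never actually stopped.
		return fmt.Errorf("host not in a usable state: %q (exit %d)", st, code)
	}
	return exec.Command(minikube, "addons", "enable", "dashboard", "-p", profile).Run()
}
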

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-945694 -n default-k8s-diff-port-945694
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-945694 -n default-k8s-diff-port-945694: exit status 3 (3.167312926s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:24:45.296856   66534 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host
	E0717 01:24:45.296876   66534 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-945694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-945694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152949647s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-945694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-945694 -n default-k8s-diff-port-945694
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-945694 -n default-k8s-diff-port-945694: exit status 3 (3.062938256s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:24:54.512924   66613 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host
	E0717 01:24:54.512950   66613 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.30:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-945694" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
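
The step above polls the kubernetes-dashboard namespace for up to 9m0s for a pod matching k8s-app=kubernetes-dashboard; each WARNING that follows is one failed list attempt that was retried because the apiserver at 192.168.61.13:8443 refused the connection. A minimal client-go sketch of that list-and-retry loop, assuming a reachable kubeconfig path; waitForDashboard is illustrative, not the helpers_test.go implementation:

package dashboardwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDashboard polls for a Running pod matching the same label selector
// the test uses. List errors (e.g. connection refused while the apiserver is
// down) are logged and retried rather than failing the wait.
func waitForDashboard(kubeconfig string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollUntilContextTimeout(context.Background(), 5*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				fmt.Printf("WARNING: pod list returned: %v\n", err) // retry, do not fail
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}
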
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
E0717 01:30:41.785562   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: (last message repeated 75 times while the apiserver at 192.168.61.13:8443 remained unreachable)
E0717 01:32:12.451680   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: (last message repeated 106 times while the apiserver at 192.168.61.13:8443 remained unreachable)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
E0717 01:34:18.739035   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
E0717 01:37:12.450925   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249342 -n old-k8s-version-249342
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249342 -n old-k8s-version-249342: exit status 2 (240.465559ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-249342" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
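For context on the repeated warnings above: the wait helper keeps listing pods in the kubernetes-dashboard namespace by label selector until a matching pod appears or the wait context expires; with the apiserver stopped, each list call fails with "connection refused" until the 9m0s budget runs out, at which point the client rate limiter reports that waiting would exceed the context deadline. The following is a minimal, illustrative Go sketch of that polling pattern, not the helpers_test.go source; the kubeconfig path and the 10-second poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the real harness resolves this per minikube profile.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// 9-minute budget, mirroring the test's wait window.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	// Assumed poll interval for this sketch.
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// With the apiserver stopped, every poll surfaces the same
			// "connection refused" error seen in the warnings above.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
		} else if len(pods.Items) > 0 {
			fmt.Println("dashboard pod found")
			return
		}

		select {
		case <-ctx.Done():
			// Corresponds to the "context deadline exceeded" failure in the report.
			fmt.Println("context deadline exceeded: pod never appeared")
			return
		case <-ticker.C:
		}
	}
}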
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342: exit status 2 (227.758605ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-249342 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-261470                              | running-upgrade-261470       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-621535                              | stopped-upgrade-621535       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:19 UTC |
	| start   | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-729236                           | kubernetes-upgrade-729236    | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	| start   | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-249342                              | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-249342             | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-249342                              | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-261470                              | running-upgrade-261470       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	| start   | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:22 UTC |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-484167            | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:21 UTC | 17 Jul 24 01:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-945694  | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC | 17 Jul 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC |                     |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-484167                 | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:23 UTC | 17 Jul 24 01:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC | 17 Jul 24 01:28 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-945694       | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC | 17 Jul 24 01:34 UTC |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC | 17 Jul 24 01:28 UTC |
	| start   | -p no-preload-818382 --memory=2200                     | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC | 17 Jul 24 01:30 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-818382             | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:30 UTC | 17 Jul 24 01:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-818382                                   | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-818382                  | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-818382 --memory=2200                     | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:32 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:32:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:32:43.547613   69161 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:32:43.547856   69161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:32:43.547865   69161 out.go:304] Setting ErrFile to fd 2...
	I0717 01:32:43.547869   69161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:32:43.548058   69161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:32:43.548591   69161 out.go:298] Setting JSON to false
	I0717 01:32:43.549476   69161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8113,"bootTime":1721171851,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:32:43.549531   69161 start.go:139] virtualization: kvm guest
	I0717 01:32:43.551667   69161 out.go:177] * [no-preload-818382] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:32:43.552978   69161 notify.go:220] Checking for updates...
	I0717 01:32:43.553027   69161 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:32:43.554498   69161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:32:43.555767   69161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:32:43.557080   69161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:32:43.558402   69161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:32:43.559566   69161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:32:43.561137   69161 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:32:43.561542   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:32:43.561591   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:43.576810   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0717 01:32:43.577217   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:43.577724   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:32:43.577746   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:43.578068   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:43.578246   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.578474   69161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:32:43.578722   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:32:43.578751   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:43.593634   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0717 01:32:43.594007   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:43.594435   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:32:43.594460   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:43.594810   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:43.594984   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.632126   69161 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:32:43.633290   69161 start.go:297] selected driver: kvm2
	I0717 01:32:43.633305   69161 start.go:901] validating driver "kvm2" against &{Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:32:43.633393   69161 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:32:43.634018   69161 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.634085   69161 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:32:43.648838   69161 install.go:137] /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:32:43.649342   69161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:32:43.649377   69161 cni.go:84] Creating CNI manager for ""
	I0717 01:32:43.649388   69161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:32:43.649454   69161 start.go:340] cluster config:
	{Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:32:43.649575   69161 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.651213   69161 out.go:177] * Starting "no-preload-818382" primary control-plane node in "no-preload-818382" cluster
	I0717 01:32:43.652698   69161 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:32:43.652866   69161 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/config.json ...
	I0717 01:32:43.652971   69161 cache.go:107] acquiring lock: {Name:mk0dda4d4cdd92722b746ab931e6544cfc8daee5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.652980   69161 cache.go:107] acquiring lock: {Name:mk1de3a52aa61e3b4e847379240ac3935bedb199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653004   69161 cache.go:107] acquiring lock: {Name:mkf6e5b69e84ed3f384772a188b9364b7e3d5b5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653072   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 01:32:43.653091   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0717 01:32:43.653102   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0717 01:32:43.653107   69161 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 146.502µs
	I0717 01:32:43.653119   69161 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653117   69161 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 121.37µs
	I0717 01:32:43.653137   69161 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653098   69161 cache.go:107] acquiring lock: {Name:mkf2f11535addf893c2faa84c376231e8d922e64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653127   69161 cache.go:107] acquiring lock: {Name:mk0f717937d10c133c40dfa3d731090d6e186c8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653157   69161 cache.go:107] acquiring lock: {Name:mkddaaee919763be73bfba0c581555b8cc97a67b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653143   69161 cache.go:107] acquiring lock: {Name:mkecaf352dd381368806d2a149fd31f0c349a680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653184   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 exists
	I0717 01:32:43.653170   69161 start.go:360] acquireMachinesLock for no-preload-818382: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:32:43.653201   69161 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0" took 76.404µs
	I0717 01:32:43.653211   69161 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0717 01:32:43.653256   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0717 01:32:43.653259   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0717 01:32:43.653270   69161 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 131.092µs
	I0717 01:32:43.653278   69161 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653278   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0717 01:32:43.653273   69161 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 220.448µs
	I0717 01:32:43.653293   69161 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0717 01:32:43.653292   69161 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 138.342µs
	I0717 01:32:43.653303   69161 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0717 01:32:43.653142   69161 cache.go:107] acquiring lock: {Name:mk2ca5e82f37242a4f02d1776db6559bdb43421e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653316   69161 start.go:364] duration metric: took 84.706µs to acquireMachinesLock for "no-preload-818382"
	I0717 01:32:43.653101   69161 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 132.422µs
	I0717 01:32:43.653358   69161 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:32:43.653360   69161 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 01:32:43.653365   69161 fix.go:54] fixHost starting: 
	I0717 01:32:43.653345   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0717 01:32:43.653380   69161 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 247.182µs
	I0717 01:32:43.653397   69161 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653413   69161 cache.go:87] Successfully saved all images to host disk.
	I0717 01:32:43.653791   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:32:43.653851   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:43.669140   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0717 01:32:43.669544   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:43.669975   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:32:43.669995   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:43.670285   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:43.670451   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.670597   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:32:43.672083   69161 fix.go:112] recreateIfNeeded on no-preload-818382: state=Running err=<nil>
	W0717 01:32:43.672118   69161 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:32:43.674037   69161 out.go:177] * Updating the running kvm2 "no-preload-818382" VM ...
	I0717 01:32:40.312635   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:42.810125   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:44.006444   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:46.006933   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:43.675220   69161 machine.go:94] provisionDockerMachine start ...
	I0717 01:32:43.675236   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.675410   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:32:43.677780   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:32:43.678159   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:29:11 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:32:43.678194   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:32:43.678285   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:32:43.678480   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:32:43.678635   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:32:43.678751   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:32:43.678900   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:32:43.679072   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:32:43.679082   69161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:32:46.576890   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:44.811604   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:47.310107   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:49.310610   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:48.007526   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:50.506280   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:49.648813   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:51.310765   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:53.810052   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:53.007282   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:55.506679   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:57.506743   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:55.728954   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:55.810343   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:57.810539   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:00.007367   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:02.509717   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:58.800813   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:59.810958   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:02.310473   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:02.804718   66659 pod_ready.go:81] duration metric: took 4m0.000441849s for pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:02.804758   66659 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 01:33:02.804776   66659 pod_ready.go:38] duration metric: took 4m11.542416864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:02.804800   66659 kubeadm.go:597] duration metric: took 4m19.055059195s to restartPrimaryControlPlane
	W0717 01:33:02.804851   66659 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 01:33:02.804875   66659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 01:33:05.008344   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:07.008631   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:04.880862   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:07.956811   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:09.506709   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:12.007454   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:14.007849   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:16.506348   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:17.072888   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:19.005817   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:21.006641   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:20.144862   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:23.007827   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:24.506621   66178 pod_ready.go:81] duration metric: took 4m0.006337956s for pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:24.506648   66178 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 01:33:24.506656   66178 pod_ready.go:38] duration metric: took 4m4.541684979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:24.506672   66178 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:33:24.506700   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:33:24.506752   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:33:24.553972   66178 cri.go:89] found id: "d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:24.553994   66178 cri.go:89] found id: ""
	I0717 01:33:24.554003   66178 logs.go:276] 1 containers: [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026]
	I0717 01:33:24.554067   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.558329   66178 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:33:24.558382   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:33:24.593681   66178 cri.go:89] found id: "980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:24.593710   66178 cri.go:89] found id: ""
	I0717 01:33:24.593717   66178 logs.go:276] 1 containers: [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c]
	I0717 01:33:24.593764   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.598462   66178 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:33:24.598521   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:33:24.638597   66178 cri.go:89] found id: "370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:24.638617   66178 cri.go:89] found id: ""
	I0717 01:33:24.638624   66178 logs.go:276] 1 containers: [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187]
	I0717 01:33:24.638674   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.642611   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:33:24.642674   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:33:24.678207   66178 cri.go:89] found id: "98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:24.678227   66178 cri.go:89] found id: ""
	I0717 01:33:24.678233   66178 logs.go:276] 1 containers: [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802]
	I0717 01:33:24.678284   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.682820   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:33:24.682884   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:33:24.724141   66178 cri.go:89] found id: "2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:24.724170   66178 cri.go:89] found id: ""
	I0717 01:33:24.724179   66178 logs.go:276] 1 containers: [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364]
	I0717 01:33:24.724231   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.729301   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:33:24.729355   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:33:24.765894   66178 cri.go:89] found id: "b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:24.765916   66178 cri.go:89] found id: ""
	I0717 01:33:24.765925   66178 logs.go:276] 1 containers: [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c]
	I0717 01:33:24.765970   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.770898   66178 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:33:24.770951   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:33:24.805812   66178 cri.go:89] found id: ""
	I0717 01:33:24.805835   66178 logs.go:276] 0 containers: []
	W0717 01:33:24.805842   66178 logs.go:278] No container was found matching "kindnet"
	I0717 01:33:24.805848   66178 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:33:24.805897   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:33:24.847766   66178 cri.go:89] found id: "a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:24.847788   66178 cri.go:89] found id: "dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:24.847794   66178 cri.go:89] found id: ""
	I0717 01:33:24.847802   66178 logs.go:276] 2 containers: [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272]
	I0717 01:33:24.847852   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.852045   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.856136   66178 logs.go:123] Gathering logs for kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] ...
	I0717 01:33:24.856161   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:24.892801   66178 logs.go:123] Gathering logs for kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] ...
	I0717 01:33:24.892829   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:24.944203   66178 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:33:24.944236   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:33:25.482400   66178 logs.go:123] Gathering logs for kubelet ...
	I0717 01:33:25.482440   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:33:25.544150   66178 logs.go:123] Gathering logs for dmesg ...
	I0717 01:33:25.544190   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:33:25.559587   66178 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:33:25.559620   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:33:25.679463   66178 logs.go:123] Gathering logs for kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] ...
	I0717 01:33:25.679488   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:25.725117   66178 logs.go:123] Gathering logs for coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] ...
	I0717 01:33:25.725144   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:25.771390   66178 logs.go:123] Gathering logs for container status ...
	I0717 01:33:25.771417   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:33:25.818766   66178 logs.go:123] Gathering logs for etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] ...
	I0717 01:33:25.818792   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:25.861973   66178 logs.go:123] Gathering logs for kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] ...
	I0717 01:33:25.862008   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:25.899694   66178 logs.go:123] Gathering logs for storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] ...
	I0717 01:33:25.899723   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:25.937573   66178 logs.go:123] Gathering logs for storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] ...
	I0717 01:33:25.937604   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:26.224800   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:28.476050   66178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:33:28.491506   66178 api_server.go:72] duration metric: took 4m14.298590069s to wait for apiserver process to appear ...
	I0717 01:33:28.491527   66178 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:33:28.491568   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:33:28.491626   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:33:28.526854   66178 cri.go:89] found id: "d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:28.526882   66178 cri.go:89] found id: ""
	I0717 01:33:28.526891   66178 logs.go:276] 1 containers: [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026]
	I0717 01:33:28.526957   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.531219   66178 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:33:28.531282   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:33:28.567901   66178 cri.go:89] found id: "980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:28.567927   66178 cri.go:89] found id: ""
	I0717 01:33:28.567937   66178 logs.go:276] 1 containers: [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c]
	I0717 01:33:28.567995   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.572030   66178 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:33:28.572094   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:33:28.606586   66178 cri.go:89] found id: "370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:28.606610   66178 cri.go:89] found id: ""
	I0717 01:33:28.606622   66178 logs.go:276] 1 containers: [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187]
	I0717 01:33:28.606679   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.611494   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:33:28.611555   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:33:28.647224   66178 cri.go:89] found id: "98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:28.647247   66178 cri.go:89] found id: ""
	I0717 01:33:28.647255   66178 logs.go:276] 1 containers: [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802]
	I0717 01:33:28.647311   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.651314   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:33:28.651376   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:33:28.686387   66178 cri.go:89] found id: "2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:28.686412   66178 cri.go:89] found id: ""
	I0717 01:33:28.686420   66178 logs.go:276] 1 containers: [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364]
	I0717 01:33:28.686473   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.691061   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:33:28.691128   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:33:28.728066   66178 cri.go:89] found id: "b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:28.728091   66178 cri.go:89] found id: ""
	I0717 01:33:28.728099   66178 logs.go:276] 1 containers: [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c]
	I0717 01:33:28.728147   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.732397   66178 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:33:28.732446   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:33:28.770233   66178 cri.go:89] found id: ""
	I0717 01:33:28.770261   66178 logs.go:276] 0 containers: []
	W0717 01:33:28.770270   66178 logs.go:278] No container was found matching "kindnet"
	I0717 01:33:28.770277   66178 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:33:28.770338   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:33:28.806271   66178 cri.go:89] found id: "a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:28.806296   66178 cri.go:89] found id: "dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:28.806302   66178 cri.go:89] found id: ""
	I0717 01:33:28.806311   66178 logs.go:276] 2 containers: [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272]
	I0717 01:33:28.806371   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.810691   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.814958   66178 logs.go:123] Gathering logs for kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] ...
	I0717 01:33:28.814976   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:28.856685   66178 logs.go:123] Gathering logs for etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] ...
	I0717 01:33:28.856712   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:28.897748   66178 logs.go:123] Gathering logs for kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] ...
	I0717 01:33:28.897790   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:28.958202   66178 logs.go:123] Gathering logs for coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] ...
	I0717 01:33:28.958228   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:28.999474   66178 logs.go:123] Gathering logs for kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] ...
	I0717 01:33:28.999501   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:29.035726   66178 logs.go:123] Gathering logs for kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] ...
	I0717 01:33:29.035758   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:29.072498   66178 logs.go:123] Gathering logs for storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] ...
	I0717 01:33:29.072524   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:29.110199   66178 logs.go:123] Gathering logs for storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] ...
	I0717 01:33:29.110226   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:29.144474   66178 logs.go:123] Gathering logs for kubelet ...
	I0717 01:33:29.144506   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:33:29.196286   66178 logs.go:123] Gathering logs for dmesg ...
	I0717 01:33:29.196315   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:33:29.210251   66178 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:33:29.210274   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:33:29.313845   66178 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:33:29.313877   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:33:29.748683   66178 logs.go:123] Gathering logs for container status ...
	I0717 01:33:29.748719   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:33:32.292005   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:33:32.296375   66178 api_server.go:279] https://192.168.72.48:8443/healthz returned 200:
	ok
	I0717 01:33:32.297480   66178 api_server.go:141] control plane version: v1.30.2
	I0717 01:33:32.297499   66178 api_server.go:131] duration metric: took 3.805966225s to wait for apiserver health ...
	I0717 01:33:32.297507   66178 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:33:32.297528   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:33:32.297569   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:33:32.336526   66178 cri.go:89] found id: "d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:32.336566   66178 cri.go:89] found id: ""
	I0717 01:33:32.336576   66178 logs.go:276] 1 containers: [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026]
	I0717 01:33:32.336629   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.340838   66178 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:33:32.340904   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:33:32.375827   66178 cri.go:89] found id: "980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:32.375853   66178 cri.go:89] found id: ""
	I0717 01:33:32.375862   66178 logs.go:276] 1 containers: [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c]
	I0717 01:33:32.375920   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.380212   66178 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:33:32.380269   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:33:32.417036   66178 cri.go:89] found id: "370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:32.417063   66178 cri.go:89] found id: ""
	I0717 01:33:32.417075   66178 logs.go:276] 1 containers: [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187]
	I0717 01:33:32.417140   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.421437   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:33:32.421507   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:33:32.455708   66178 cri.go:89] found id: "98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:32.455732   66178 cri.go:89] found id: ""
	I0717 01:33:32.455741   66178 logs.go:276] 1 containers: [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802]
	I0717 01:33:32.455799   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.464218   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:33:32.464299   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:33:32.506931   66178 cri.go:89] found id: "2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:32.506958   66178 cri.go:89] found id: ""
	I0717 01:33:32.506968   66178 logs.go:276] 1 containers: [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364]
	I0717 01:33:32.507030   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.511493   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:33:32.511562   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:33:32.554706   66178 cri.go:89] found id: "b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:32.554731   66178 cri.go:89] found id: ""
	I0717 01:33:32.554741   66178 logs.go:276] 1 containers: [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c]
	I0717 01:33:32.554806   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.559101   66178 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:33:32.559175   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:33:32.598078   66178 cri.go:89] found id: ""
	I0717 01:33:32.598113   66178 logs.go:276] 0 containers: []
	W0717 01:33:32.598126   66178 logs.go:278] No container was found matching "kindnet"
	I0717 01:33:32.598135   66178 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:33:32.598209   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:33:29.300812   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:34.426424   66659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.621528106s)
	I0717 01:33:34.426506   66659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:33:34.441446   66659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:33:34.451230   66659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:33:34.460682   66659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:33:34.460702   66659 kubeadm.go:157] found existing configuration files:
	
	I0717 01:33:34.460746   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 01:33:34.469447   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:33:34.469496   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:33:34.478412   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 01:33:34.487047   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:33:34.487096   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:33:34.496243   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 01:33:34.504852   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:33:34.504907   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:33:34.513592   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 01:33:34.521997   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:33:34.522048   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:33:34.530773   66659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:33:32.639086   66178 cri.go:89] found id: "a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:32.639113   66178 cri.go:89] found id: "dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:32.639119   66178 cri.go:89] found id: ""
	I0717 01:33:32.639127   66178 logs.go:276] 2 containers: [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272]
	I0717 01:33:32.639185   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.643404   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.648144   66178 logs.go:123] Gathering logs for kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] ...
	I0717 01:33:32.648165   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:32.700179   66178 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:33:32.700212   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:33:33.091798   66178 logs.go:123] Gathering logs for container status ...
	I0717 01:33:33.091840   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:33:33.142057   66178 logs.go:123] Gathering logs for kubelet ...
	I0717 01:33:33.142095   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:33:33.197532   66178 logs.go:123] Gathering logs for kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] ...
	I0717 01:33:33.197567   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:33.248356   66178 logs.go:123] Gathering logs for etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] ...
	I0717 01:33:33.248393   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:33.290624   66178 logs.go:123] Gathering logs for coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] ...
	I0717 01:33:33.290652   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:33.338525   66178 logs.go:123] Gathering logs for kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] ...
	I0717 01:33:33.338557   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:33.379963   66178 logs.go:123] Gathering logs for dmesg ...
	I0717 01:33:33.379998   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:33:33.393448   66178 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:33:33.393472   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:33:33.497330   66178 logs.go:123] Gathering logs for kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] ...
	I0717 01:33:33.497366   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:33.534015   66178 logs.go:123] Gathering logs for storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] ...
	I0717 01:33:33.534048   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:33.569753   66178 logs.go:123] Gathering logs for storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] ...
	I0717 01:33:33.569779   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
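The log-gathering commands above can be replayed by hand against the same node for ad-hoc debugging. A rough sketch, assuming the embed-certs-484167 profile still exists and using minikube ssh to reach the VM:

    minikube ssh -p embed-certs-484167 -- sudo journalctl -u kubelet -n 400
    minikube ssh -p embed-certs-484167 -- sudo journalctl -u crio -n 400
    minikube ssh -p embed-certs-484167 -- sudo crictl ps -a
    # substitute a container ID from the 'crictl ps -a' output for <container-id>
    minikube ssh -p embed-certs-484167 -- sudo crictl logs --tail 400 <container-id>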
	I0717 01:33:36.112668   66178 system_pods.go:59] 8 kube-system pods found
	I0717 01:33:36.112698   66178 system_pods.go:61] "coredns-7db6d8ff4d-z4qpz" [43aa103c-9e70-4fb1-8607-321b6904a218] Running
	I0717 01:33:36.112704   66178 system_pods.go:61] "etcd-embed-certs-484167" [55918032-05ab-4a5b-951c-c8d4a063751e] Running
	I0717 01:33:36.112710   66178 system_pods.go:61] "kube-apiserver-embed-certs-484167" [39facb47-77a1-4eb7-9c7e-795b35adb238] Running
	I0717 01:33:36.112716   66178 system_pods.go:61] "kube-controller-manager-embed-certs-484167" [270c8cb6-2fdd-4cec-9692-ecc2950ce3b2] Running
	I0717 01:33:36.112721   66178 system_pods.go:61] "kube-proxy-gq7qg" [ac9a0ae4-28e0-4900-a39b-f7a0eba7cc06] Running
	I0717 01:33:36.112726   66178 system_pods.go:61] "kube-scheduler-embed-certs-484167" [e9ea6022-e399-42a3-b8c9-a09a57aa8126] Running
	I0717 01:33:36.112734   66178 system_pods.go:61] "metrics-server-569cc877fc-2qwf6" [caefc20d-d993-46cb-b815-e4ae30ce4e85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:33:36.112741   66178 system_pods.go:61] "storage-provisioner" [620df9ee-45a9-4b04-a21c-0ddc878375ca] Running
	I0717 01:33:36.112752   66178 system_pods.go:74] duration metric: took 3.81523968s to wait for pod list to return data ...
	I0717 01:33:36.112760   66178 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:33:36.114860   66178 default_sa.go:45] found service account: "default"
	I0717 01:33:36.114880   66178 default_sa.go:55] duration metric: took 2.115012ms for default service account to be created ...
	I0717 01:33:36.114888   66178 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:33:36.119333   66178 system_pods.go:86] 8 kube-system pods found
	I0717 01:33:36.119357   66178 system_pods.go:89] "coredns-7db6d8ff4d-z4qpz" [43aa103c-9e70-4fb1-8607-321b6904a218] Running
	I0717 01:33:36.119363   66178 system_pods.go:89] "etcd-embed-certs-484167" [55918032-05ab-4a5b-951c-c8d4a063751e] Running
	I0717 01:33:36.119368   66178 system_pods.go:89] "kube-apiserver-embed-certs-484167" [39facb47-77a1-4eb7-9c7e-795b35adb238] Running
	I0717 01:33:36.119372   66178 system_pods.go:89] "kube-controller-manager-embed-certs-484167" [270c8cb6-2fdd-4cec-9692-ecc2950ce3b2] Running
	I0717 01:33:36.119376   66178 system_pods.go:89] "kube-proxy-gq7qg" [ac9a0ae4-28e0-4900-a39b-f7a0eba7cc06] Running
	I0717 01:33:36.119382   66178 system_pods.go:89] "kube-scheduler-embed-certs-484167" [e9ea6022-e399-42a3-b8c9-a09a57aa8126] Running
	I0717 01:33:36.119392   66178 system_pods.go:89] "metrics-server-569cc877fc-2qwf6" [caefc20d-d993-46cb-b815-e4ae30ce4e85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:33:36.119401   66178 system_pods.go:89] "storage-provisioner" [620df9ee-45a9-4b04-a21c-0ddc878375ca] Running
	I0717 01:33:36.119410   66178 system_pods.go:126] duration metric: took 4.516516ms to wait for k8s-apps to be running ...
	I0717 01:33:36.119423   66178 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:33:36.119469   66178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:33:36.135747   66178 system_svc.go:56] duration metric: took 16.316004ms WaitForService to wait for kubelet
	I0717 01:33:36.135778   66178 kubeadm.go:582] duration metric: took 4m21.94286469s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:33:36.135806   66178 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:33:36.140253   66178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:33:36.140274   66178 node_conditions.go:123] node cpu capacity is 2
	I0717 01:33:36.140285   66178 node_conditions.go:105] duration metric: took 4.473888ms to run NodePressure ...
	I0717 01:33:36.140296   66178 start.go:241] waiting for startup goroutines ...
	I0717 01:33:36.140306   66178 start.go:246] waiting for cluster config update ...
	I0717 01:33:36.140326   66178 start.go:255] writing updated cluster config ...
	I0717 01:33:36.140642   66178 ssh_runner.go:195] Run: rm -f paused
	I0717 01:33:36.188858   66178 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:33:36.191016   66178 out.go:177] * Done! kubectl is now configured to use "embed-certs-484167" cluster and "default" namespace by default
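The readiness gates checked above (kube-system pods, default service account, kubelet service, node conditions) can be spot-checked manually once the profile is up. A minimal sketch, assuming the embed-certs-484167 kubectl context that minikube reports having configured:

    kubectl --context embed-certs-484167 -n kube-system get pods
    kubectl --context embed-certs-484167 get sa default
    # node name matches the profile, per the etcd-embed-certs-484167 pod name above
    kubectl --context embed-certs-484167 describe node embed-certs-484167
    minikube ssh -p embed-certs-484167 -- sudo systemctl is-active kubelet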
	I0717 01:33:35.376822   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:38.448812   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:34.720645   66659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:33:43.308866   66659 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 01:33:43.308943   66659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:33:43.309108   66659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:33:43.309260   66659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:33:43.309392   66659 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:33:43.309485   66659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:33:43.311060   66659 out.go:204]   - Generating certificates and keys ...
	I0717 01:33:43.311120   66659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:33:43.311229   66659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:33:43.311320   66659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 01:33:43.311396   66659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 01:33:43.311505   66659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 01:33:43.311595   66659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 01:33:43.311682   66659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 01:33:43.311746   66659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 01:33:43.311807   66659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 01:33:43.311893   66659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 01:33:43.311960   66659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 01:33:43.312019   66659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:33:43.312083   66659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:33:43.312165   66659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 01:33:43.312247   66659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:33:43.312337   66659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:33:43.312395   66659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:33:43.312479   66659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:33:43.312534   66659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:33:43.313917   66659 out.go:204]   - Booting up control plane ...
	I0717 01:33:43.313994   66659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:33:43.314085   66659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:33:43.314183   66659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:33:43.314304   66659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:33:43.314415   66659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:33:43.314471   66659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:33:43.314608   66659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 01:33:43.314728   66659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 01:33:43.314817   66659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00137795s
	I0717 01:33:43.314955   66659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 01:33:43.315048   66659 kubeadm.go:310] [api-check] The API server is healthy after 5.002451289s
	I0717 01:33:43.315206   66659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 01:33:43.315310   66659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 01:33:43.315364   66659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 01:33:43.315550   66659 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-945694 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 01:33:43.315640   66659 kubeadm.go:310] [bootstrap-token] Using token: eqtrsf.jetqj440l3wkhk98
	I0717 01:33:43.317933   66659 out.go:204]   - Configuring RBAC rules ...
	I0717 01:33:43.318050   66659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 01:33:43.318148   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 01:33:43.318293   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 01:33:43.318405   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 01:33:43.318513   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 01:33:43.318599   66659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 01:33:43.318755   66659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 01:33:43.318831   66659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 01:33:43.318883   66659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 01:33:43.318890   66659 kubeadm.go:310] 
	I0717 01:33:43.318937   66659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 01:33:43.318945   66659 kubeadm.go:310] 
	I0717 01:33:43.319058   66659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 01:33:43.319068   66659 kubeadm.go:310] 
	I0717 01:33:43.319102   66659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 01:33:43.319189   66659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 01:33:43.319251   66659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 01:33:43.319257   66659 kubeadm.go:310] 
	I0717 01:33:43.319333   66659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 01:33:43.319343   66659 kubeadm.go:310] 
	I0717 01:33:43.319407   66659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 01:33:43.319416   66659 kubeadm.go:310] 
	I0717 01:33:43.319485   66659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 01:33:43.319607   66659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 01:33:43.319690   66659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 01:33:43.319698   66659 kubeadm.go:310] 
	I0717 01:33:43.319797   66659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 01:33:43.319910   66659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 01:33:43.319925   66659 kubeadm.go:310] 
	I0717 01:33:43.320045   66659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token eqtrsf.jetqj440l3wkhk98 \
	I0717 01:33:43.320187   66659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 \
	I0717 01:33:43.320232   66659 kubeadm.go:310] 	--control-plane 
	I0717 01:33:43.320239   66659 kubeadm.go:310] 
	I0717 01:33:43.320349   66659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 01:33:43.320359   66659 kubeadm.go:310] 
	I0717 01:33:43.320469   66659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token eqtrsf.jetqj440l3wkhk98 \
	I0717 01:33:43.320642   66659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 
	I0717 01:33:43.320672   66659 cni.go:84] Creating CNI manager for ""
	I0717 01:33:43.320685   66659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:33:43.322373   66659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:33:43.323549   66659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:33:43.336069   66659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
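The bridge CNI step above only records that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; the payload itself is not in the log. To inspect what was actually written on the node (a sketch, assuming the default-k8s-diff-port-945694 profile):

    minikube ssh -p default-k8s-diff-port-945694 -- sudo cat /etc/cni/net.d/1-k8s.conflist
    # list every CNI config CRI-O will consider
    minikube ssh -p default-k8s-diff-port-945694 -- sudo ls -l /etc/cni/net.d/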
	I0717 01:33:43.354981   66659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:33:43.355060   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:43.355068   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-945694 minikube.k8s.io/updated_at=2024_07_17T01_33_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=default-k8s-diff-port-945694 minikube.k8s.io/primary=true
	I0717 01:33:43.564470   66659 ops.go:34] apiserver oom_adj: -16
	I0717 01:33:43.564611   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:44.065352   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:44.528766   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:47.604799   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:44.565059   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:45.065658   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:45.565085   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:46.064718   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:46.564689   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:47.064998   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:47.564664   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:48.064694   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:48.565187   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:49.065439   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:49.564950   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:50.065001   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:50.565505   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:51.065369   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:51.564969   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:52.065293   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:52.564953   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:53.065324   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:53.565120   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:54.065189   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:54.565611   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:55.065105   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:55.565494   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:56.065453   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:56.565393   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:56.656280   66659 kubeadm.go:1113] duration metric: took 13.301288619s to wait for elevateKubeSystemPrivileges
	I0717 01:33:56.656319   66659 kubeadm.go:394] duration metric: took 5m12.994113939s to StartCluster
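The block of near-identical 'kubectl get sa default' runs above is a poll-until-success loop: judging by the timestamps, minikube retries roughly every 500ms until the default service account exists, then records the elevateKubeSystemPrivileges duration (about 13.3s here). A roughly equivalent shell loop, using the same binary and kubeconfig paths shown in the log:

    until sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # back off briefly between attempts
    done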
	I0717 01:33:56.656341   66659 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:33:56.656429   66659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:33:56.658062   66659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:33:56.658318   66659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:33:56.658384   66659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:33:56.658471   66659 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-945694"
	I0717 01:33:56.658506   66659 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-945694"
	W0717 01:33:56.658516   66659 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:33:56.658514   66659 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-945694"
	I0717 01:33:56.658545   66659 host.go:66] Checking if "default-k8s-diff-port-945694" exists ...
	I0717 01:33:56.658544   66659 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-945694"
	I0717 01:33:56.658565   66659 config.go:182] Loaded profile config "default-k8s-diff-port-945694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:33:56.658566   66659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-945694"
	I0717 01:33:56.658590   66659 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-945694"
	W0717 01:33:56.658603   66659 addons.go:243] addon metrics-server should already be in state true
	I0717 01:33:56.658631   66659 host.go:66] Checking if "default-k8s-diff-port-945694" exists ...
	I0717 01:33:56.658840   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.658867   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.658941   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.658967   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.658946   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.659047   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.660042   66659 out.go:177] * Verifying Kubernetes components...
	I0717 01:33:56.661365   66659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:33:56.675427   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0717 01:33:56.675919   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.676434   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.676455   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.676887   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.677764   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.677807   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.678856   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44785
	I0717 01:33:56.679033   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0717 01:33:56.679281   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.679550   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.680055   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.680079   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.680153   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.680173   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.680443   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.680523   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.680711   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.681210   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.681252   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.684317   66659 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-945694"
	W0717 01:33:56.684338   66659 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:33:56.684362   66659 host.go:66] Checking if "default-k8s-diff-port-945694" exists ...
	I0717 01:33:56.684670   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.684706   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.693393   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0717 01:33:56.693836   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.694292   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.694309   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.694640   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.694801   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.696212   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .DriverName
	I0717 01:33:56.698217   66659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:33:56.699432   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:33:56.699455   66659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:33:56.699472   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHHostname
	I0717 01:33:56.700565   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I0717 01:33:56.701036   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.701563   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.701578   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.701920   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.702150   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.702903   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.703250   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:3e:63", ip: ""} in network mk-default-k8s-diff-port-945694: {Iface:virbr2 ExpiryTime:2024-07-17 02:28:27 +0000 UTC Type:0 Mac:52:54:00:c9:3e:63 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-945694 Clientid:01:52:54:00:c9:3e:63}
	I0717 01:33:56.703275   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined IP address 192.168.50.30 and MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.703457   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHPort
	I0717 01:33:56.703732   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .DriverName
	I0717 01:33:56.703951   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHKeyPath
	I0717 01:33:56.704282   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHUsername
	I0717 01:33:56.704422   66659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/default-k8s-diff-port-945694/id_rsa Username:docker}
	I0717 01:33:56.705576   66659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:33:56.707192   66659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:33:56.707207   66659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:33:56.707219   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHHostname
	I0717 01:33:56.707551   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0717 01:33:56.708045   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.708589   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.708611   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.708957   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.709503   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.709545   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.710201   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.710818   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:3e:63", ip: ""} in network mk-default-k8s-diff-port-945694: {Iface:virbr2 ExpiryTime:2024-07-17 02:28:27 +0000 UTC Type:0 Mac:52:54:00:c9:3e:63 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-945694 Clientid:01:52:54:00:c9:3e:63}
	I0717 01:33:56.710854   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined IP address 192.168.50.30 and MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.711103   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHPort
	I0717 01:33:56.711476   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHKeyPath
	I0717 01:33:56.711751   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHUsername
	I0717 01:33:56.711938   66659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/default-k8s-diff-port-945694/id_rsa Username:docker}
	I0717 01:33:56.724041   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44045
	I0717 01:33:56.724450   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.724943   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.724965   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.725264   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.725481   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.727357   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .DriverName
	I0717 01:33:56.727567   66659 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:33:56.727579   66659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:33:56.727592   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHHostname
	I0717 01:33:56.730575   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.730916   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:3e:63", ip: ""} in network mk-default-k8s-diff-port-945694: {Iface:virbr2 ExpiryTime:2024-07-17 02:28:27 +0000 UTC Type:0 Mac:52:54:00:c9:3e:63 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-945694 Clientid:01:52:54:00:c9:3e:63}
	I0717 01:33:56.730930   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined IP address 192.168.50.30 and MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.731147   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHPort
	I0717 01:33:56.731295   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHKeyPath
	I0717 01:33:56.731414   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHUsername
	I0717 01:33:56.731558   66659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/default-k8s-diff-port-945694/id_rsa Username:docker}
	I0717 01:33:56.880324   66659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:33:56.907224   66659 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-945694" to be "Ready" ...
	I0717 01:33:56.916791   66659 node_ready.go:49] node "default-k8s-diff-port-945694" has status "Ready":"True"
	I0717 01:33:56.916814   66659 node_ready.go:38] duration metric: took 9.553813ms for node "default-k8s-diff-port-945694" to be "Ready" ...
	I0717 01:33:56.916825   66659 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:56.929744   66659 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jbsq5" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:56.991132   66659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:33:57.020549   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:33:57.020582   66659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:33:57.041856   66659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:33:57.095649   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:33:57.095672   66659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:33:57.145707   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:33:57.145737   66659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:33:57.220983   66659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:33:57.569863   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.569888   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.569966   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.569995   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.570184   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.570210   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.570221   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.570221   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.570255   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.570230   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.570274   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.570289   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.570314   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.570325   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.570476   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.570508   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.570514   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.572038   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.572054   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.572095   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.584086   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.584114   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.584383   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.584402   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.951559   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.951583   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.952039   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.952039   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.952055   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.952068   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.952076   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.952317   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.952328   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.952338   66659 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-945694"
	I0717 01:33:57.954803   66659 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:33:53.680800   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:56.752809   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:57.956002   66659 addons.go:510] duration metric: took 1.29761252s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
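Note that the metrics-server addon here is wired to fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image fake.domain/..." line above), which would explain why its pod stays Pending / ContainersNotReady in the pod listings that follow. To confirm what is blocking the container, a sketch assuming the addon's pods carry the standard k8s-app=metrics-server label:

    kubectl --context default-k8s-diff-port-945694 -n kube-system get pods -l k8s-app=metrics-server
    # the Events section is expected to show image pulls failing against fake.domain (assumption based on the registry name)
    kubectl --context default-k8s-diff-port-945694 -n kube-system describe pods -l k8s-app=metrics-server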
	I0717 01:33:58.936404   66659 pod_ready.go:92] pod "coredns-7db6d8ff4d-jbsq5" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.936430   66659 pod_ready.go:81] duration metric: took 2.006657028s for pod "coredns-7db6d8ff4d-jbsq5" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.936440   66659 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mqjqg" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.940948   66659 pod_ready.go:92] pod "coredns-7db6d8ff4d-mqjqg" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.940968   66659 pod_ready.go:81] duration metric: took 4.522302ms for pod "coredns-7db6d8ff4d-mqjqg" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.940976   66659 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.944815   66659 pod_ready.go:92] pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.944830   66659 pod_ready.go:81] duration metric: took 3.847888ms for pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.944838   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.949022   66659 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.949039   66659 pod_ready.go:81] duration metric: took 4.196556ms for pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.949049   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.953438   66659 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.953456   66659 pod_ready.go:81] duration metric: took 4.401091ms for pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.953467   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55xmv" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.335149   66659 pod_ready.go:92] pod "kube-proxy-55xmv" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:59.335174   66659 pod_ready.go:81] duration metric: took 381.700119ms for pod "kube-proxy-55xmv" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.335187   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.734445   66659 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:59.734473   66659 pod_ready.go:81] duration metric: took 399.276861ms for pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.734483   66659 pod_ready.go:38] duration metric: took 2.817646454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:59.734499   66659 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:33:59.734557   66659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:33:59.750547   66659 api_server.go:72] duration metric: took 3.092197547s to wait for apiserver process to appear ...
	I0717 01:33:59.750573   66659 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:33:59.750595   66659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0717 01:33:59.755670   66659 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0717 01:33:59.756553   66659 api_server.go:141] control plane version: v1.30.2
	I0717 01:33:59.756591   66659 api_server.go:131] duration metric: took 6.009468ms to wait for apiserver health ...
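The healthz probe above can be reproduced directly against the apiserver endpoint logged here; a minimal sketch with TLS verification skipped via -k, assuming anonymous access to /healthz is enabled (the Kubernetes default):

    curl -k https://192.168.50.30:8444/healthz
    # a healthy control plane answers HTTP 200 with the body "ok", matching the log above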
	I0717 01:33:59.756599   66659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:33:59.938573   66659 system_pods.go:59] 9 kube-system pods found
	I0717 01:33:59.938605   66659 system_pods.go:61] "coredns-7db6d8ff4d-jbsq5" [0a95f33d-19ef-4b2e-a94e-08bbcaff92dc] Running
	I0717 01:33:59.938611   66659 system_pods.go:61] "coredns-7db6d8ff4d-mqjqg" [ca27ce06-d171-4edd-9a1d-11898283f3ac] Running
	I0717 01:33:59.938615   66659 system_pods.go:61] "etcd-default-k8s-diff-port-945694" [213d53e1-92c9-4b8a-b9ff-6b7f12acd149] Running
	I0717 01:33:59.938618   66659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-945694" [b22e53fb-feec-4684-a672-f9c9b326bc36] Running
	I0717 01:33:59.938622   66659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-945694" [dc840bd9-5087-4642-8e84-8392d188e85f] Running
	I0717 01:33:59.938626   66659 system_pods.go:61] "kube-proxy-55xmv" [ee6913d5-3362-4a9f-a159-1f9b1da7380a] Running
	I0717 01:33:59.938631   66659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-945694" [7bfa8bdb-a9af-4e6b-8a11-f9b6791e2647] Running
	I0717 01:33:59.938640   66659 system_pods.go:61] "metrics-server-569cc877fc-4nffv" [ba214ec1-a180-42ec-847e-80464e102765] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:33:59.938646   66659 system_pods.go:61] "storage-provisioner" [3352a0de-41db-4537-b87a-24137084aa7a] Running
	I0717 01:33:59.938657   66659 system_pods.go:74] duration metric: took 182.050448ms to wait for pod list to return data ...
	I0717 01:33:59.938669   66659 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:34:00.133695   66659 default_sa.go:45] found service account: "default"
	I0717 01:34:00.133719   66659 default_sa.go:55] duration metric: took 195.042344ms for default service account to be created ...
	I0717 01:34:00.133729   66659 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:34:00.338087   66659 system_pods.go:86] 9 kube-system pods found
	I0717 01:34:00.338127   66659 system_pods.go:89] "coredns-7db6d8ff4d-jbsq5" [0a95f33d-19ef-4b2e-a94e-08bbcaff92dc] Running
	I0717 01:34:00.338137   66659 system_pods.go:89] "coredns-7db6d8ff4d-mqjqg" [ca27ce06-d171-4edd-9a1d-11898283f3ac] Running
	I0717 01:34:00.338143   66659 system_pods.go:89] "etcd-default-k8s-diff-port-945694" [213d53e1-92c9-4b8a-b9ff-6b7f12acd149] Running
	I0717 01:34:00.338151   66659 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-945694" [b22e53fb-feec-4684-a672-f9c9b326bc36] Running
	I0717 01:34:00.338159   66659 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-945694" [dc840bd9-5087-4642-8e84-8392d188e85f] Running
	I0717 01:34:00.338166   66659 system_pods.go:89] "kube-proxy-55xmv" [ee6913d5-3362-4a9f-a159-1f9b1da7380a] Running
	I0717 01:34:00.338173   66659 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-945694" [7bfa8bdb-a9af-4e6b-8a11-f9b6791e2647] Running
	I0717 01:34:00.338184   66659 system_pods.go:89] "metrics-server-569cc877fc-4nffv" [ba214ec1-a180-42ec-847e-80464e102765] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:34:00.338196   66659 system_pods.go:89] "storage-provisioner" [3352a0de-41db-4537-b87a-24137084aa7a] Running
	I0717 01:34:00.338205   66659 system_pods.go:126] duration metric: took 204.470489ms to wait for k8s-apps to be running ...
	I0717 01:34:00.338218   66659 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:34:00.338274   66659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:34:00.352151   66659 system_svc.go:56] duration metric: took 13.921542ms WaitForService to wait for kubelet
	I0717 01:34:00.352188   66659 kubeadm.go:582] duration metric: took 3.693843091s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:34:00.352213   66659 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:34:00.535457   66659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:34:00.535478   66659 node_conditions.go:123] node cpu capacity is 2
	I0717 01:34:00.535489   66659 node_conditions.go:105] duration metric: took 183.271273ms to run NodePressure ...
	I0717 01:34:00.535500   66659 start.go:241] waiting for startup goroutines ...
	I0717 01:34:00.535506   66659 start.go:246] waiting for cluster config update ...
	I0717 01:34:00.535515   66659 start.go:255] writing updated cluster config ...
	I0717 01:34:00.535731   66659 ssh_runner.go:195] Run: rm -f paused
	I0717 01:34:00.581917   66659 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:34:00.583994   66659 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-945694" cluster and "default" namespace by default
	I0717 01:34:02.832840   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:05.904845   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:11.984893   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:15.056813   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:21.136802   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:24.208771   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:30.288821   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:33.360818   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:39.440802   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:42.512824   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:48.592870   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:51.668822   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:57.744791   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:00.816890   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:06.896783   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:09.968897   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:16.048887   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:19.120810   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:25.200832   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:28.272897   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:34.352811   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:37.424805   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:43.504775   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:46.576767   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:52.656845   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:55.728841   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:01.808828   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:04.880828   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:10.964781   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:14.032790   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:20.112803   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:23.184780   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:29.264888   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:32.340810   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:38.416815   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:41.488801   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:47.572801   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:50.640840   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:56.720825   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:59.792797   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:05.876784   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:08.944812   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:15.024792   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:18.096815   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:21.098660   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:37:21.098691   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:21.098996   69161 buildroot.go:166] provisioning hostname "no-preload-818382"
	I0717 01:37:21.099019   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:21.099239   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:21.100820   69161 machine.go:97] duration metric: took 4m37.425586326s to provisionDockerMachine
	I0717 01:37:21.100856   69161 fix.go:56] duration metric: took 4m37.44749197s for fixHost
	I0717 01:37:21.100862   69161 start.go:83] releasing machines lock for "no-preload-818382", held for 4m37.447517491s
	W0717 01:37:21.100875   69161 start.go:714] error starting host: provision: host is not running
	W0717 01:37:21.100944   69161 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 01:37:21.100953   69161 start.go:729] Will try again in 5 seconds ...
	I0717 01:37:26.102733   69161 start.go:360] acquireMachinesLock for no-preload-818382: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:37:26.102820   69161 start.go:364] duration metric: took 53.679µs to acquireMachinesLock for "no-preload-818382"
	I0717 01:37:26.102845   69161 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:37:26.102852   69161 fix.go:54] fixHost starting: 
	I0717 01:37:26.103150   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:37:26.103173   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:37:26.119906   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33241
	I0717 01:37:26.120394   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:37:26.120930   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:37:26.120952   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:37:26.121328   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:37:26.121541   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:26.121680   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:37:26.123050   69161 fix.go:112] recreateIfNeeded on no-preload-818382: state=Stopped err=<nil>
	I0717 01:37:26.123069   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	W0717 01:37:26.123226   69161 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:37:26.125020   69161 out.go:177] * Restarting existing kvm2 VM for "no-preload-818382" ...
	I0717 01:37:26.126273   69161 main.go:141] libmachine: (no-preload-818382) Calling .Start
	I0717 01:37:26.126469   69161 main.go:141] libmachine: (no-preload-818382) Ensuring networks are active...
	I0717 01:37:26.127225   69161 main.go:141] libmachine: (no-preload-818382) Ensuring network default is active
	I0717 01:37:26.127552   69161 main.go:141] libmachine: (no-preload-818382) Ensuring network mk-no-preload-818382 is active
	I0717 01:37:26.127899   69161 main.go:141] libmachine: (no-preload-818382) Getting domain xml...
	I0717 01:37:26.128571   69161 main.go:141] libmachine: (no-preload-818382) Creating domain...
	I0717 01:37:27.345119   69161 main.go:141] libmachine: (no-preload-818382) Waiting to get IP...
	I0717 01:37:27.346205   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:27.346716   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:27.346764   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:27.346681   70303 retry.go:31] will retry after 199.66464ms: waiting for machine to come up
	I0717 01:37:27.548206   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:27.548848   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:27.548873   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:27.548815   70303 retry.go:31] will retry after 280.929524ms: waiting for machine to come up
	I0717 01:37:27.831501   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:27.831934   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:27.831964   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:27.831916   70303 retry.go:31] will retry after 301.466781ms: waiting for machine to come up
	I0717 01:37:28.135465   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:28.135945   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:28.135981   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:28.135907   70303 retry.go:31] will retry after 393.103911ms: waiting for machine to come up
	I0717 01:37:28.530344   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:28.530791   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:28.530815   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:28.530761   70303 retry.go:31] will retry after 518.699896ms: waiting for machine to come up
	I0717 01:37:29.051266   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:29.051722   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:29.051763   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:29.051702   70303 retry.go:31] will retry after 618.253779ms: waiting for machine to come up
	I0717 01:37:29.671578   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:29.672083   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:29.672111   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:29.672032   70303 retry.go:31] will retry after 718.051367ms: waiting for machine to come up
	I0717 01:37:30.391904   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:30.392339   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:30.392367   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:30.392290   70303 retry.go:31] will retry after 1.040644293s: waiting for machine to come up
	I0717 01:37:31.434846   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:31.435419   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:31.435467   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:31.435401   70303 retry.go:31] will retry after 1.802022391s: waiting for machine to come up
	I0717 01:37:33.238798   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:33.239381   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:33.239409   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:33.239333   70303 retry.go:31] will retry after 1.417897015s: waiting for machine to come up
	I0717 01:37:34.658523   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:34.659018   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:34.659046   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:34.658971   70303 retry.go:31] will retry after 2.736057609s: waiting for machine to come up
	I0717 01:37:37.396582   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:37.397249   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:37.397279   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:37.397179   70303 retry.go:31] will retry after 2.2175965s: waiting for machine to come up
	I0717 01:37:39.616404   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:39.616819   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:39.616852   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:39.616775   70303 retry.go:31] will retry after 4.136811081s: waiting for machine to come up
	I0717 01:37:43.754795   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.755339   69161 main.go:141] libmachine: (no-preload-818382) Found IP for machine: 192.168.39.38
	I0717 01:37:43.755364   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has current primary IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.755370   69161 main.go:141] libmachine: (no-preload-818382) Reserving static IP address...
	I0717 01:37:43.755825   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "no-preload-818382", mac: "52:54:00:e4:de:04", ip: "192.168.39.38"} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.755856   69161 main.go:141] libmachine: (no-preload-818382) Reserved static IP address: 192.168.39.38
	I0717 01:37:43.755870   69161 main.go:141] libmachine: (no-preload-818382) DBG | skip adding static IP to network mk-no-preload-818382 - found existing host DHCP lease matching {name: "no-preload-818382", mac: "52:54:00:e4:de:04", ip: "192.168.39.38"}
	I0717 01:37:43.755885   69161 main.go:141] libmachine: (no-preload-818382) DBG | Getting to WaitForSSH function...
	I0717 01:37:43.755893   69161 main.go:141] libmachine: (no-preload-818382) Waiting for SSH to be available...
	I0717 01:37:43.758007   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.758337   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.758366   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.758581   69161 main.go:141] libmachine: (no-preload-818382) DBG | Using SSH client type: external
	I0717 01:37:43.758615   69161 main.go:141] libmachine: (no-preload-818382) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa (-rw-------)
	I0717 01:37:43.758640   69161 main.go:141] libmachine: (no-preload-818382) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:37:43.758650   69161 main.go:141] libmachine: (no-preload-818382) DBG | About to run SSH command:
	I0717 01:37:43.758662   69161 main.go:141] libmachine: (no-preload-818382) DBG | exit 0
	I0717 01:37:43.884574   69161 main.go:141] libmachine: (no-preload-818382) DBG | SSH cmd err, output: <nil>: 
	I0717 01:37:43.884894   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetConfigRaw
	I0717 01:37:43.885637   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:43.888140   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.888641   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.888673   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.888992   69161 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/config.json ...
	I0717 01:37:43.889212   69161 machine.go:94] provisionDockerMachine start ...
	I0717 01:37:43.889237   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:43.889449   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:43.892095   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.892409   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.892451   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.892636   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:43.892814   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:43.892978   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:43.893129   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:43.893272   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:43.893472   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:43.893487   69161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:37:44.004698   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:37:44.004726   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:44.005009   69161 buildroot.go:166] provisioning hostname "no-preload-818382"
	I0717 01:37:44.005035   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:44.005206   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.008187   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.008700   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.008726   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.008920   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.009094   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.009286   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.009441   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.009612   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:44.009770   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:44.009781   69161 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-818382 && echo "no-preload-818382" | sudo tee /etc/hostname
	I0717 01:37:44.136253   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-818382
	
	I0717 01:37:44.136281   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.138973   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.139255   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.139284   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.139469   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.139643   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.139828   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.140012   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.140288   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:44.140479   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:44.140504   69161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-818382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-818382/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-818382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:37:44.266505   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:37:44.266534   69161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 01:37:44.266551   69161 buildroot.go:174] setting up certificates
	I0717 01:37:44.266562   69161 provision.go:84] configureAuth start
	I0717 01:37:44.266580   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:44.266878   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:44.269798   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.270235   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.270268   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.270404   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.272533   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.272880   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.272907   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.273042   69161 provision.go:143] copyHostCerts
	I0717 01:37:44.273125   69161 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 01:37:44.273144   69161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 01:37:44.273206   69161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 01:37:44.273316   69161 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 01:37:44.273326   69161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 01:37:44.273351   69161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 01:37:44.273410   69161 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 01:37:44.273414   69161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 01:37:44.273433   69161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 01:37:44.273487   69161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.no-preload-818382 san=[127.0.0.1 192.168.39.38 localhost minikube no-preload-818382]
	I0717 01:37:44.479434   69161 provision.go:177] copyRemoteCerts
	I0717 01:37:44.479494   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:37:44.479540   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.482477   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.482908   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.482946   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.483128   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.483327   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.483455   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.483580   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:44.571236   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:37:44.596972   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 01:37:44.621104   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:37:44.643869   69161 provision.go:87] duration metric: took 377.294141ms to configureAuth
	I0717 01:37:44.643898   69161 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:37:44.644105   69161 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:37:44.644180   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.646792   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.647149   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.647179   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.647336   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.647539   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.647675   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.647780   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.647927   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:44.648096   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:44.648110   69161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:37:44.939532   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:37:44.939559   69161 machine.go:97] duration metric: took 1.050331351s to provisionDockerMachine
	I0717 01:37:44.939571   69161 start.go:293] postStartSetup for "no-preload-818382" (driver="kvm2")
	I0717 01:37:44.939587   69161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:37:44.939631   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:44.940024   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:37:44.940056   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.942783   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.943199   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.943225   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.943340   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.943504   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.943643   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.943806   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:45.027519   69161 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:37:45.031577   69161 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:37:45.031599   69161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:37:45.031667   69161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:37:45.031760   69161 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:37:45.031877   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:37:45.041021   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:37:45.064965   69161 start.go:296] duration metric: took 125.382388ms for postStartSetup
	I0717 01:37:45.064998   69161 fix.go:56] duration metric: took 18.96214661s for fixHost
	I0717 01:37:45.065016   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:45.067787   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.068183   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.068217   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.068340   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:45.068582   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.068751   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.068904   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:45.069063   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:45.069226   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:45.069239   69161 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:37:45.181490   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180265.155979386
	
	I0717 01:37:45.181513   69161 fix.go:216] guest clock: 1721180265.155979386
	I0717 01:37:45.181522   69161 fix.go:229] Guest: 2024-07-17 01:37:45.155979386 +0000 UTC Remote: 2024-07-17 01:37:45.065002166 +0000 UTC m=+301.553951222 (delta=90.97722ms)
	I0717 01:37:45.181546   69161 fix.go:200] guest clock delta is within tolerance: 90.97722ms
	I0717 01:37:45.181551   69161 start.go:83] releasing machines lock for "no-preload-818382", held for 19.07872127s
	I0717 01:37:45.181570   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.181832   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:45.184836   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.185246   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.185273   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.185420   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.185969   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.186161   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.186303   69161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:37:45.186354   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:45.186440   69161 ssh_runner.go:195] Run: cat /version.json
	I0717 01:37:45.186464   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:45.189106   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189351   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189501   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.189548   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189674   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:45.189876   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.189883   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.189910   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189957   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:45.190062   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:45.190122   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.190251   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:45.190283   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:45.190505   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:45.273517   69161 ssh_runner.go:195] Run: systemctl --version
	I0717 01:37:45.297810   69161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:37:45.444285   69161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:37:45.450949   69161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:37:45.451015   69161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:37:45.469442   69161 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:37:45.469470   69161 start.go:495] detecting cgroup driver to use...
	I0717 01:37:45.469534   69161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:37:45.488907   69161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:37:45.503268   69161 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:37:45.503336   69161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:37:45.516933   69161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:37:45.530525   69161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:37:45.642175   69161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:37:45.802107   69161 docker.go:233] disabling docker service ...
	I0717 01:37:45.802170   69161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:37:45.815967   69161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:37:45.827961   69161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:37:45.948333   69161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:37:46.066388   69161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:37:46.081332   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:37:46.102124   69161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 01:37:46.102209   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.113289   69161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:37:46.113361   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.123902   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.133825   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.143399   69161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:37:46.153336   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.163110   69161 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.179869   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.190114   69161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:37:46.199740   69161 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:37:46.199791   69161 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:37:46.212405   69161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:37:46.223444   69161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:37:46.337353   69161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:37:46.486553   69161 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:37:46.486616   69161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:37:46.491747   69161 start.go:563] Will wait 60s for crictl version
	I0717 01:37:46.491820   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:46.495749   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:37:46.537334   69161 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:37:46.537418   69161 ssh_runner.go:195] Run: crio --version
	I0717 01:37:46.566918   69161 ssh_runner.go:195] Run: crio --version
	I0717 01:37:46.598762   69161 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 01:37:46.600041   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:46.602939   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:46.603358   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:46.603387   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:46.603645   69161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:37:46.607975   69161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:37:46.621718   69161 kubeadm.go:883] updating cluster {Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:37:46.621869   69161 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:37:46.621921   69161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:37:46.657321   69161 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 01:37:46.657346   69161 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:37:46.657389   69161 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:46.657417   69161 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:46.657446   69161 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 01:37:46.657480   69161 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.657596   69161 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:46.657645   69161 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:46.657653   69161 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.657733   69161 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.659108   69161 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 01:37:46.659120   69161 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:46.659172   69161 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.659109   69161 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:46.659171   69161 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.659209   69161 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:46.659210   69161 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.659110   69161 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:46.818816   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.824725   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.825088   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.825902   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:46.830336   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:46.842814   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 01:37:46.876989   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:46.906964   69161 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 01:37:46.907012   69161 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.907060   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:46.953522   69161 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 01:37:46.953572   69161 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.953624   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:46.985236   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:46.990623   69161 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 01:37:46.990667   69161 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.990715   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.000280   69161 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 01:37:47.000313   69161 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:47.000354   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.009927   69161 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 01:37:47.009976   69161 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:47.010045   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.124625   69161 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 01:37:47.124677   69161 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:47.124706   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:47.124718   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.124805   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:47.124853   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:47.124877   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:47.124906   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:47.124804   69161 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 01:37:47.124949   69161 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:47.124983   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.231159   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:47.231201   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 01:37:47.231217   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:47.231243   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:47.231263   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:47.231302   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:37:47.231349   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:47.231414   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:47.231570   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:47.231431   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:47.231464   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 01:37:47.231715   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:37:47.279220   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 01:37:47.279239   69161 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:37:47.279286   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:37:47.293132   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 01:37:47.293233   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 01:37:47.293243   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:37:47.293309   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 01:37:47.293313   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 01:37:47.293338   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 01:37:47.293480   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 01:37:47.293582   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:37:51.052908   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.773599434s)
	I0717 01:37:51.052941   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 01:37:51.052963   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:51.052960   69161 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (3.759674708s)
	I0717 01:37:51.052994   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 01:37:51.053016   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:51.053020   69161 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.75941775s)
	I0717 01:37:51.053050   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 01:37:52.809764   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.756726059s)
	I0717 01:37:52.809790   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 01:37:52.809818   69161 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:37:52.809884   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:37:54.565189   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.755280201s)
	I0717 01:37:54.565217   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 01:37:54.565251   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:54.565341   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:56.720406   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.155036511s)
	I0717 01:37:56.720439   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 01:37:56.720473   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:56.720538   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:58.168141   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.447572914s)
	I0717 01:37:58.168181   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 01:37:58.168216   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:37:58.168278   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:38:00.033559   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.865254148s)
	I0717 01:38:00.033590   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 01:38:00.033619   69161 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:38:00.033680   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:38:00.885074   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 01:38:00.885123   69161 cache_images.go:123] Successfully loaded all cached images
	I0717 01:38:00.885131   69161 cache_images.go:92] duration metric: took 14.22776998s to LoadCachedImages
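Up to this point the log shows minikube transferring its cached image tarballs to the guest and loading them into CRI-O storage with podman. A minimal sketch of reproducing one load-and-verify step by hand, assuming the tarball already sits under /var/lib/minikube/images as the "copy: skipping ... (exists)" lines indicate (the crictl check is an extra step, not something this log runs):

    # Load one cached image tarball into CRI-O's storage on the node.
    minikube -p no-preload-818382 ssh "sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0"

    # Assumed verification step: confirm the image is now visible to the CRI runtime.
    minikube -p no-preload-818382 ssh "sudo crictl images | grep registry.k8s.io/etcd"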
	I0717 01:38:00.885149   69161 kubeadm.go:934] updating node { 192.168.39.38 8443 v1.31.0-beta.0 crio true true} ...
	I0717 01:38:00.885276   69161 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-818382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:38:00.885360   69161 ssh_runner.go:195] Run: crio config
	I0717 01:38:00.935613   69161 cni.go:84] Creating CNI manager for ""
	I0717 01:38:00.935637   69161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:38:00.935649   69161 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:38:00.935674   69161 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-818382 NodeName:no-preload-818382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:38:00.935799   69161 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-818382"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:38:00.935866   69161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 01:38:00.946897   69161 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:38:00.946982   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:38:00.956493   69161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0717 01:38:00.974619   69161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 01:38:00.992580   69161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
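The rendered kubeadm/kubelet/kube-proxy configuration shown above is copied to /var/tmp/minikube/kubeadm.yaml.new at this point. A hedged sketch for inspecting and sanity-checking that file on the node; the `kubeadm config validate` subcommand is an assumption here (present in recent kubeadm releases) and is not something minikube invokes in this log:

    # Inspect the file minikube rendered from the config shown above.
    minikube -p no-preload-818382 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"

    # Assumed optional check: ask kubeadm itself to validate the rendered config.
    minikube -p no-preload-818382 ssh "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"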
	I0717 01:38:01.009552   69161 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0717 01:38:01.013704   69161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:38:01.026053   69161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:38:01.150532   69161 ssh_runner.go:195] Run: sudo systemctl start kubelet
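After rewriting the kubelet drop-in and unit file, minikube reloads systemd and starts kubelet. If this step ever failed, the obvious places to look would be the unit status and journal; a sketch, not part of what this test runs:

    # Check the kubelet service and its most recent log lines on the node.
    minikube -p no-preload-818382 ssh "sudo systemctl status kubelet --no-pager"
    minikube -p no-preload-818382 ssh "sudo journalctl -u kubelet --no-pager -n 50"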
	I0717 01:38:01.167166   69161 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382 for IP: 192.168.39.38
	I0717 01:38:01.167196   69161 certs.go:194] generating shared ca certs ...
	I0717 01:38:01.167219   69161 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:01.167398   69161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:38:01.167485   69161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:38:01.167504   69161 certs.go:256] generating profile certs ...
	I0717 01:38:01.167622   69161 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/client.key
	I0717 01:38:01.167740   69161 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/apiserver.key.0a44641a
	I0717 01:38:01.167811   69161 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/proxy-client.key
	I0717 01:38:01.167996   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:38:01.168037   69161 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:38:01.168049   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:38:01.168094   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:38:01.168137   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:38:01.168176   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:38:01.168241   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:38:01.169161   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:38:01.202385   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:38:01.236910   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:38:01.270000   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:38:01.306655   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:38:01.355634   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:38:01.386958   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:38:01.411202   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:38:01.435949   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:38:01.460843   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:38:01.486827   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:38:01.511874   69161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:38:01.529784   69161 ssh_runner.go:195] Run: openssl version
	I0717 01:38:01.535968   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:38:01.547564   69161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:38:01.552546   69161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:38:01.552611   69161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:38:01.558592   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:38:01.569461   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:38:01.580422   69161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:38:01.585228   69161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:38:01.585276   69161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:38:01.591149   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:38:01.602249   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:38:01.614146   69161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:01.618807   69161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:01.618868   69161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:01.624861   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
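The ls/openssl/ln sequence above installs each CA into the node's trust store: the symlink name (for example b5213941.0) is the subject hash printed by `openssl x509 -hash -noout`, with a ".0" suffix. A small sketch of that naming rule, run on the node (the echo is illustrative):

    # Trust-store entries are "<subject-hash>.0" symlinks pointing at the PEM file.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    echo "linked as ${hash}.0"   # b5213941.0 in the log above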
	I0717 01:38:01.635446   69161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:38:01.640287   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:38:01.646102   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:38:01.651967   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:38:01.658169   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:38:01.664359   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:38:01.670597   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
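The `openssl x509 -checkend 86400` calls above exit non-zero when a certificate expires within the next 24 hours, which is how minikube decides the existing control-plane certs are still usable. The same check run by hand (certificate path taken from the log; the messages are illustrative):

    # Exit status 0 means the cert stays valid for at least another 86400 seconds (24h).
    if minikube -p no-preload-818382 ssh "openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400"; then
      echo "apiserver-kubelet-client.crt valid for >24h"
    else
      echo "apiserver-kubelet-client.crt expires within 24h (or could not be read)"
    fi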
	I0717 01:38:01.677288   69161 kubeadm.go:392] StartCluster: {Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:38:01.677378   69161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:38:01.677434   69161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:38:01.718896   69161 cri.go:89] found id: ""
	I0717 01:38:01.718964   69161 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:38:01.730404   69161 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:38:01.730426   69161 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:38:01.730467   69161 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:38:01.742131   69161 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:38:01.743114   69161 kubeconfig.go:125] found "no-preload-818382" server: "https://192.168.39.38:8443"
	I0717 01:38:01.745151   69161 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:38:01.755348   69161 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0717 01:38:01.755379   69161 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:38:01.755393   69161 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:38:01.755441   69161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:38:01.794585   69161 cri.go:89] found id: ""
	I0717 01:38:01.794657   69161 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:38:01.811878   69161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:38:01.822275   69161 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:38:01.822297   69161 kubeadm.go:157] found existing configuration files:
	
	I0717 01:38:01.822349   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:38:01.832295   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:38:01.832361   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:38:01.841853   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:38:01.850743   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:38:01.850792   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:38:01.860061   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:38:01.869640   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:38:01.869695   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:38:01.879146   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:38:01.888664   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:38:01.888730   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:38:01.898051   69161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:38:01.907209   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:02.013763   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.064624   69161 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.050830101s)
	I0717 01:38:03.064658   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.281880   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.360185   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
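During this soft restart minikube re-runs a subset of kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init. A sketch of the same sequence driven by hand, with the binary and config paths copied from the log:

    # Phases mirror the ones shown in the log above.
    KADM=/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      minikube -p no-preload-818382 ssh "sudo $KADM init phase $phase --config $CFG"
    done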
	I0717 01:38:03.475762   69161 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:38:03.475859   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:38:03.976869   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:38:04.476826   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:38:04.513612   69161 api_server.go:72] duration metric: took 1.03785049s to wait for apiserver process to appear ...
	I0717 01:38:04.513637   69161 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:38:04.513658   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:04.514182   69161 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0717 01:38:05.013987   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:07.606646   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:38:07.606681   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:38:07.606698   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:07.644623   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:38:07.644659   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:38:08.014209   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:08.018649   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:38:08.018675   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:38:08.513802   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:08.523658   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:38:08.523683   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:38:09.013997   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:09.018582   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0717 01:38:09.025524   69161 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 01:38:09.025556   69161 api_server.go:131] duration metric: took 4.511910476s to wait for apiserver health ...
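The healthz polling above goes through the usual restart progression: connection refused while the static pod comes up, 403 for the anonymous user, 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200. A hedged way to run the same probe from the host, presenting the profile's client certificate so the request is not anonymous (paths follow the minikube layout seen earlier in this log):

    curl --silent --show-error \
      --cacert /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt \
      --cert   /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/client.crt \
      --key    /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/client.key \
      "https://192.168.39.38:8443/healthz?verbose"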
	I0717 01:38:09.025567   69161 cni.go:84] Creating CNI manager for ""
	I0717 01:38:09.025576   69161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:38:09.026854   69161 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:38:09.028050   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:38:09.054928   69161 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
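Minikube writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist here; the file's contents are not included in this log. To see exactly what was written (an inspection step only):

    minikube -p no-preload-818382 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"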
	I0717 01:38:09.099807   69161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:38:09.110763   69161 system_pods.go:59] 8 kube-system pods found
	I0717 01:38:09.110804   69161 system_pods.go:61] "coredns-5cfdc65f69-rzhfk" [eb91980f-dca7-4dd0-902e-7d1ffac4e1b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:38:09.110817   69161 system_pods.go:61] "etcd-no-preload-818382" [99688a8a-50fc-416b-9c00-23a516eab775] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:38:09.110827   69161 system_pods.go:61] "kube-apiserver-no-preload-818382" [3e08eb95-84f7-4541-a2c3-9a5b9e3365f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:38:09.110835   69161 system_pods.go:61] "kube-controller-manager-no-preload-818382" [d356be23-8cd9-4f72-94e6-354a39f587eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:38:09.110843   69161 system_pods.go:61] "kube-proxy-7xjgl" [79ab1bff-5791-464d-98a0-041c53c47234] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:38:09.110852   69161 system_pods.go:61] "kube-scheduler-no-preload-818382" [e148b48b-ee09-49b4-9600-83c039254f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:38:09.110862   69161 system_pods.go:61] "metrics-server-78fcd8795b-vgkwg" [6386b732-76a6-4744-9215-e4764e08e4e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:38:09.110872   69161 system_pods.go:61] "storage-provisioner" [c5a0695e-6c38-463e-8f96-60c0e60c7132] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 01:38:09.110881   69161 system_pods.go:74] duration metric: took 11.048265ms to wait for pod list to return data ...
	I0717 01:38:09.110895   69161 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:38:09.115164   69161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:38:09.115185   69161 node_conditions.go:123] node cpu capacity is 2
	I0717 01:38:09.115195   69161 node_conditions.go:105] duration metric: took 4.295793ms to run NodePressure ...
	I0717 01:38:09.115222   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:09.380448   69161 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:38:09.385062   69161 kubeadm.go:739] kubelet initialised
	I0717 01:38:09.385081   69161 kubeadm.go:740] duration metric: took 4.609373ms waiting for restarted kubelet to initialise ...
	I0717 01:38:09.385089   69161 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:38:09.390128   69161 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.395089   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.395114   69161 pod_ready.go:81] duration metric: took 4.964286ms for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.395122   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.395130   69161 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.400466   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "etcd-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.400485   69161 pod_ready.go:81] duration metric: took 5.34752ms for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.400494   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "etcd-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.400502   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.406059   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-apiserver-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.406079   69161 pod_ready.go:81] duration metric: took 5.569824ms for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.406087   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-apiserver-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.406094   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.508478   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.508503   69161 pod_ready.go:81] duration metric: took 102.401908ms for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.508513   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.508521   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.903484   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-proxy-7xjgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.903507   69161 pod_ready.go:81] duration metric: took 394.977533ms for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.903516   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-proxy-7xjgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.903522   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:10.303374   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-scheduler-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.303400   69161 pod_ready.go:81] duration metric: took 399.87153ms for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:10.303410   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-scheduler-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.303417   69161 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:10.703844   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.703872   69161 pod_ready.go:81] duration metric: took 400.446731ms for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:10.703882   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.703890   69161 pod_ready.go:38] duration metric: took 1.31879349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
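Every per-pod wait above is cut short because the node itself has not reported Ready yet, so minikube records the "skipping!" errors and moves on. A hedged kubectl equivalent for watching the same conditions from the host (context name assumed to match the profile, as elsewhere in this report; the timeout is illustrative):

    kubectl --context no-preload-818382 wait node/no-preload-818382 --for=condition=Ready --timeout=4m
    kubectl --context no-preload-818382 -n kube-system wait pod --all --for=condition=Ready --timeout=4m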
	I0717 01:38:10.703906   69161 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:38:10.716314   69161 ops.go:34] apiserver oom_adj: -16
	I0717 01:38:10.716330   69161 kubeadm.go:597] duration metric: took 8.985898425s to restartPrimaryControlPlane
	I0717 01:38:10.716338   69161 kubeadm.go:394] duration metric: took 9.0390568s to StartCluster
	I0717 01:38:10.716357   69161 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:10.716443   69161 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:38:10.718239   69161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:10.718467   69161 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:38:10.718525   69161 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:38:10.718599   69161 addons.go:69] Setting storage-provisioner=true in profile "no-preload-818382"
	I0717 01:38:10.718615   69161 addons.go:69] Setting default-storageclass=true in profile "no-preload-818382"
	I0717 01:38:10.718632   69161 addons.go:234] Setting addon storage-provisioner=true in "no-preload-818382"
	W0717 01:38:10.718641   69161 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:38:10.718657   69161 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-818382"
	I0717 01:38:10.718648   69161 addons.go:69] Setting metrics-server=true in profile "no-preload-818382"
	I0717 01:38:10.718669   69161 host.go:66] Checking if "no-preload-818382" exists ...
	I0717 01:38:10.718684   69161 addons.go:234] Setting addon metrics-server=true in "no-preload-818382"
	W0717 01:38:10.718694   69161 addons.go:243] addon metrics-server should already be in state true
	I0717 01:38:10.718710   69161 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:38:10.718720   69161 host.go:66] Checking if "no-preload-818382" exists ...
	I0717 01:38:10.718995   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.719013   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.719033   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.719036   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.719037   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.719062   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.720225   69161 out.go:177] * Verifying Kubernetes components...
	I0717 01:38:10.721645   69161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:38:10.735669   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I0717 01:38:10.735668   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42639
	I0717 01:38:10.736213   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.736224   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.736697   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.736712   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.736749   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.736761   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.737065   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.737104   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.737517   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37461
	I0717 01:38:10.737604   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.737623   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.737632   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.737643   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.737988   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.738548   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.738575   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.738916   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.739154   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.742601   69161 addons.go:234] Setting addon default-storageclass=true in "no-preload-818382"
	W0717 01:38:10.742621   69161 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:38:10.742649   69161 host.go:66] Checking if "no-preload-818382" exists ...
	I0717 01:38:10.742978   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.743000   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.753050   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40075
	I0717 01:38:10.761069   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.761760   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.761778   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.762198   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.762374   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.764056   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:38:10.766144   69161 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:38:10.767506   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:38:10.767527   69161 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:38:10.767546   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:38:10.770625   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.771141   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:38:10.771169   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.771354   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:38:10.771538   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:38:10.771797   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:38:10.771964   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:38:10.777232   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I0717 01:38:10.777667   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.778207   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.778234   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.778629   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.778820   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.780129   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43699
	I0717 01:38:10.780526   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:38:10.780732   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.781258   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.781283   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.781642   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.782089   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.782134   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.782214   69161 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:38:10.783466   69161 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:38:10.783484   69161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:38:10.783501   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:38:10.786557   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.786985   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:38:10.787006   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.787233   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:38:10.787393   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:38:10.787514   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:38:10.787610   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:38:10.798054   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I0717 01:38:10.798498   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.798922   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.798942   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.799281   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.799452   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.801194   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:38:10.801413   69161 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:38:10.801428   69161 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:38:10.801444   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:38:10.804551   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.804963   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:38:10.804988   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.805103   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:38:10.805413   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:38:10.805564   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:38:10.805712   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:38:10.941843   69161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:38:10.962485   69161 node_ready.go:35] waiting up to 6m0s for node "no-preload-818382" to be "Ready" ...
	I0717 01:38:11.029564   69161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:38:11.047993   69161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:38:11.180628   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:38:11.180648   69161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:38:11.254864   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:38:11.254891   69161 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:38:11.322266   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:38:11.322290   69161 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:38:11.386819   69161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:38:12.107148   69161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.059119392s)
	I0717 01:38:12.107209   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107223   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107351   69161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077746478s)
	I0717 01:38:12.107396   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107407   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107523   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.107542   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.107553   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107562   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107751   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.107766   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.107780   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.107789   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107793   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.107798   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107824   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.107831   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.108023   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.108056   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.108064   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.120981   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.121012   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.121920   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.121942   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.121958   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.192883   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.192908   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.193311   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.193357   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.193369   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.193378   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.193389   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.193656   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.193695   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.193704   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.193720   69161 addons.go:475] Verifying addon metrics-server=true in "no-preload-818382"
	I0717 01:38:12.196085   69161 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:38:12.197195   69161 addons.go:510] duration metric: took 1.478669603s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 01:38:12.968419   69161 node_ready.go:53] node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:15.466641   69161 node_ready.go:53] node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:17.966396   69161 node_ready.go:49] node "no-preload-818382" has status "Ready":"True"
	I0717 01:38:17.966419   69161 node_ready.go:38] duration metric: took 7.003900387s for node "no-preload-818382" to be "Ready" ...
	I0717 01:38:17.966428   69161 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:38:17.972276   69161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:17.979661   69161 pod_ready.go:92] pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:17.979686   69161 pod_ready.go:81] duration metric: took 7.383414ms for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:17.979700   69161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	
	
	==> CRI-O <==
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.329653668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180300329622508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe4925ce-d8e7-4906-867c-93006e4ec232 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.330361905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1005c61-9a32-44d8-8f11-ef276e3d45c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.330432091Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1005c61-9a32-44d8-8f11-ef276e3d45c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.330478108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c1005c61-9a32-44d8-8f11-ef276e3d45c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.366581752Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f7a26dd-674a-4453-b805-515b29493624 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.366690400Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f7a26dd-674a-4453-b805-515b29493624 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.368633064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b398fac4-bc3d-4af9-a38a-f92df844615e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.369247985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180300369161583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b398fac4-bc3d-4af9-a38a-f92df844615e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.369975307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=135e1dc9-93f9-459b-b65a-6cc8be1e428a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.370044185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=135e1dc9-93f9-459b-b65a-6cc8be1e428a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.370102858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=135e1dc9-93f9-459b-b65a-6cc8be1e428a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.403268391Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64915175-c051-4caa-929d-7cabdc092c76 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.403352921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64915175-c051-4caa-929d-7cabdc092c76 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.404775920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3e31e2e-a258-4b4d-a76e-881f55568e14 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.405187777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180300405164932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3e31e2e-a258-4b4d-a76e-881f55568e14 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.406039304Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=884ac33a-9c5f-41a1-84c8-75f254d8c9fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.406111645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=884ac33a-9c5f-41a1-84c8-75f254d8c9fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.406148906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=884ac33a-9c5f-41a1-84c8-75f254d8c9fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.437134277Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98ac5d51-0195-4569-a341-47617f757251 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.437267996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98ac5d51-0195-4569-a341-47617f757251 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.438397558Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=05ba6849-27a5-43ff-91a3-22b92f749c29 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.438917769Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180300438889115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05ba6849-27a5-43ff-91a3-22b92f749c29 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.439402662Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=738da093-b2aa-4d5b-9688-6323971a785b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.439457422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=738da093-b2aa-4d5b-9688-6323971a785b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:38:20 old-k8s-version-249342 crio[653]: time="2024-07-17 01:38:20.439493584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=738da093-b2aa-4d5b-9688-6323971a785b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul17 01:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053856] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042451] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.738175] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul17 01:21] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586475] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.258109] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.060071] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055484] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.214160] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.115956] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.256032] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +6.048119] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.063005] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.184126] systemd-fstab-generator[967]: Ignoring "noauto" option for root device
	[ +10.091636] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 01:25] systemd-fstab-generator[5033]: Ignoring "noauto" option for root device
	[Jul17 01:27] systemd-fstab-generator[5317]: Ignoring "noauto" option for root device
	[  +0.061098] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:38:20 up 17 min,  0 users,  load average: 0.17, 0.07, 0.03
	Linux old-k8s-version-249342 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002662a0, 0xc00009e0c0)
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]: goroutine 155 [syscall]:
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]: syscall.Syscall6(0xe8, 0xd, 0xc0009d9b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x7e6312, 0xc0006dbb68, 0x7e5e18)
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xd, 0xc0009d9b6c, 0x7, 0x7, 0xffffffffffffffff, 0x1000000016c, 0x0, 0x0)
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0008665a0, 0x10900000000, 0x10000000100, 0x1)
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000051720)
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Jul 17 01:38:19 old-k8s-version-249342 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 01:38:19 old-k8s-version-249342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 01:38:19 old-k8s-version-249342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 17 01:38:19 old-k8s-version-249342 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 01:38:19 old-k8s-version-249342 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6509]: I0717 01:38:19.732123    6509 server.go:416] Version: v1.20.0
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6509]: I0717 01:38:19.732578    6509 server.go:837] Client rotation is on, will bootstrap in background
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6509]: I0717 01:38:19.739720    6509 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6509]: W0717 01:38:19.742525    6509 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 17 01:38:19 old-k8s-version-249342 kubelet[6509]: I0717 01:38:19.743253    6509 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249342 -n old-k8s-version-249342
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249342 -n old-k8s-version-249342: exit status 2 (230.255793ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-249342" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.52s)
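The kubelet log above shows restart counter 114 and an empty container list, so the apiserver on localhost:8443 never comes up and the describe-nodes command is refused. A minimal triage sketch, assuming the old-k8s-version-249342 VM is still running and reachable over SSH (these commands are illustrative and were not run as part of this test):

	# check why the v1.20.0 kubelet keeps exiting with status 255
	out/minikube-linux-amd64 ssh -p old-k8s-version-249342 "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 ssh -p old-k8s-version-249342 "sudo journalctl -u kubelet --no-pager | tail -n 100"
	# confirm whether any control-plane containers were ever created
	out/minikube-linux-amd64 ssh -p old-k8s-version-249342 "sudo crictl ps -a"

The "Cannot detect current cgroup on cgroup v2" warning in the kubelet output is one plausible lead for why the v1.20.0 kubelet fails on this guest image, but the journalctl output would be needed to confirm that.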

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-818382 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-818382 --alsologtostderr -v=3: exit status 82 (2m0.488162996s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-818382"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:30:12.214924   68504 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:30:12.215045   68504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:30:12.215056   68504 out.go:304] Setting ErrFile to fd 2...
	I0717 01:30:12.215061   68504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:30:12.215261   68504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:30:12.215487   68504 out.go:298] Setting JSON to false
	I0717 01:30:12.215559   68504 mustload.go:65] Loading cluster: no-preload-818382
	I0717 01:30:12.215945   68504 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:30:12.216011   68504 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/config.json ...
	I0717 01:30:12.216172   68504 mustload.go:65] Loading cluster: no-preload-818382
	I0717 01:30:12.216267   68504 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:30:12.216289   68504 stop.go:39] StopHost: no-preload-818382
	I0717 01:30:12.216680   68504 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:30:12.216718   68504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:30:12.231450   68504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45331
	I0717 01:30:12.231931   68504 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:30:12.232437   68504 main.go:141] libmachine: Using API Version  1
	I0717 01:30:12.232458   68504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:30:12.232796   68504 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:30:12.234973   68504 out.go:177] * Stopping node "no-preload-818382"  ...
	I0717 01:30:12.236190   68504 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 01:30:12.236222   68504 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:30:12.236490   68504 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 01:30:12.236525   68504 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:30:12.239400   68504 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:30:12.239771   68504 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:29:11 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:30:12.239818   68504 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:30:12.239972   68504 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:30:12.240158   68504 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:30:12.240338   68504 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:30:12.240491   68504 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:30:12.332749   68504 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 01:30:12.395052   68504 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 01:30:12.451765   68504 main.go:141] libmachine: Stopping "no-preload-818382"...
	I0717 01:30:12.451822   68504 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:30:12.453315   68504 main.go:141] libmachine: (no-preload-818382) Calling .Stop
	I0717 01:30:12.457289   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 0/120
	I0717 01:30:13.458560   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 1/120
	I0717 01:30:14.459909   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 2/120
	I0717 01:30:15.462195   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 3/120
	I0717 01:30:16.463669   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 4/120
	I0717 01:30:17.465586   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 5/120
	I0717 01:30:18.466960   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 6/120
	I0717 01:30:19.468230   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 7/120
	I0717 01:30:20.469726   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 8/120
	I0717 01:30:21.471209   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 9/120
	I0717 01:30:22.473229   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 10/120
	I0717 01:30:23.475057   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 11/120
	I0717 01:30:24.477366   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 12/120
	I0717 01:30:25.478928   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 13/120
	I0717 01:30:26.480437   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 14/120
	I0717 01:30:27.481737   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 15/120
	I0717 01:30:28.483008   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 16/120
	I0717 01:30:29.484336   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 17/120
	I0717 01:30:30.485638   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 18/120
	I0717 01:30:31.488062   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 19/120
	I0717 01:30:32.490035   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 20/120
	I0717 01:30:33.491647   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 21/120
	I0717 01:30:34.492985   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 22/120
	I0717 01:30:35.495182   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 23/120
	I0717 01:30:36.496487   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 24/120
	I0717 01:30:37.498081   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 25/120
	I0717 01:30:38.499468   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 26/120
	I0717 01:30:39.500958   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 27/120
	I0717 01:30:40.502334   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 28/120
	I0717 01:30:41.504047   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 29/120
	I0717 01:30:42.506093   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 30/120
	I0717 01:30:43.507456   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 31/120
	I0717 01:30:44.509487   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 32/120
	I0717 01:30:45.511158   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 33/120
	I0717 01:30:46.512609   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 34/120
	I0717 01:30:47.513960   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 35/120
	I0717 01:30:48.515126   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 36/120
	I0717 01:30:49.517095   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 37/120
	I0717 01:30:50.519061   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 38/120
	I0717 01:30:51.520673   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 39/120
	I0717 01:30:52.522838   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 40/120
	I0717 01:30:53.524089   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 41/120
	I0717 01:30:54.525988   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 42/120
	I0717 01:30:55.527240   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 43/120
	I0717 01:30:56.528398   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 44/120
	I0717 01:30:57.530351   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 45/120
	I0717 01:30:58.531679   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 46/120
	I0717 01:30:59.533674   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 47/120
	I0717 01:31:00.535091   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 48/120
	I0717 01:31:01.536339   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 49/120
	I0717 01:31:02.538584   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 50/120
	I0717 01:31:03.539921   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 51/120
	I0717 01:31:04.541175   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 52/120
	I0717 01:31:05.542535   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 53/120
	I0717 01:31:06.544117   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 54/120
	I0717 01:31:07.546152   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 55/120
	I0717 01:31:08.547799   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 56/120
	I0717 01:31:09.549161   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 57/120
	I0717 01:31:10.550688   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 58/120
	I0717 01:31:11.552166   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 59/120
	I0717 01:31:12.554365   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 60/120
	I0717 01:31:13.556776   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 61/120
	I0717 01:31:14.557961   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 62/120
	I0717 01:31:15.559848   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 63/120
	I0717 01:31:16.561090   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 64/120
	I0717 01:31:17.562557   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 65/120
	I0717 01:31:18.564047   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 66/120
	I0717 01:31:19.565414   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 67/120
	I0717 01:31:20.566899   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 68/120
	I0717 01:31:21.568216   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 69/120
	I0717 01:31:22.570779   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 70/120
	I0717 01:31:23.572378   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 71/120
	I0717 01:31:24.573937   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 72/120
	I0717 01:31:25.575113   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 73/120
	I0717 01:31:26.576479   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 74/120
	I0717 01:31:27.578509   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 75/120
	I0717 01:31:28.579977   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 76/120
	I0717 01:31:29.581554   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 77/120
	I0717 01:31:30.583236   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 78/120
	I0717 01:31:31.585052   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 79/120
	I0717 01:31:32.587161   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 80/120
	I0717 01:31:33.589061   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 81/120
	I0717 01:31:34.590577   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 82/120
	I0717 01:31:35.592603   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 83/120
	I0717 01:31:36.594498   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 84/120
	I0717 01:31:37.596454   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 85/120
	I0717 01:31:38.598697   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 86/120
	I0717 01:31:39.599943   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 87/120
	I0717 01:31:40.601611   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 88/120
	I0717 01:31:41.603126   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 89/120
	I0717 01:31:42.605266   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 90/120
	I0717 01:31:43.607447   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 91/120
	I0717 01:31:44.608846   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 92/120
	I0717 01:31:45.610218   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 93/120
	I0717 01:31:46.611525   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 94/120
	I0717 01:31:47.613419   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 95/120
	I0717 01:31:48.614752   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 96/120
	I0717 01:31:49.616241   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 97/120
	I0717 01:31:50.617648   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 98/120
	I0717 01:31:51.619121   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 99/120
	I0717 01:31:52.620782   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 100/120
	I0717 01:31:53.622289   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 101/120
	I0717 01:31:54.624921   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 102/120
	I0717 01:31:55.626211   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 103/120
	I0717 01:31:56.627652   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 104/120
	I0717 01:31:57.629446   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 105/120
	I0717 01:31:58.631491   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 106/120
	I0717 01:31:59.632938   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 107/120
	I0717 01:32:00.634289   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 108/120
	I0717 01:32:01.636489   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 109/120
	I0717 01:32:02.638588   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 110/120
	I0717 01:32:03.639843   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 111/120
	I0717 01:32:04.641501   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 112/120
	I0717 01:32:05.643006   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 113/120
	I0717 01:32:06.644588   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 114/120
	I0717 01:32:07.646306   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 115/120
	I0717 01:32:08.647543   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 116/120
	I0717 01:32:09.648903   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 117/120
	I0717 01:32:10.650918   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 118/120
	I0717 01:32:11.652694   68504 main.go:141] libmachine: (no-preload-818382) Waiting for machine to stop 119/120
	I0717 01:32:12.654035   68504 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 01:32:12.654096   68504 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 01:32:12.656222   68504 out.go:177] 
	W0717 01:32:12.657525   68504 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 01:32:12.657555   68504 out.go:239] * 
	* 
	W0717 01:32:12.660849   68504 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 01:32:12.662098   68504 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-818382 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-818382 -n no-preload-818382
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-818382 -n no-preload-818382: exit status 3 (18.457285569s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:32:31.120993   68957 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	E0717 01:32:31.121019   68957 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-818382" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.95s)
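The stop timed out after all 120 wait attempts (exit status 82, GUEST_STOP_TIMEOUT) and the follow-up status check can no longer reach 192.168.39.38. A minimal sketch of how the guest could be inspected and forced off outside the test harness, assuming access to the same system libvirt instance the kvm2 driver uses (the libmachine debug lines above show the domain name matches the profile name):

	# ask libvirt directly what state the guest is in
	virsh -c qemu:///system domstate no-preload-818382
	# if the guest ignored the graceful shutdown, power it off and let the next minikube command reconcile
	virsh -c qemu:///system destroy no-preload-818382
	out/minikube-linux-amd64 status -p no-preload-818382

The advice printed by minikube itself still applies: attach the output of "minikube logs --file=logs.txt" and the /tmp/minikube_stop_*.log file when filing the issue.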

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-818382 -n no-preload-818382
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-818382 -n no-preload-818382: exit status 3 (3.171928338s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:32:34.292882   69052 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	E0717 01:32:34.292906   69052 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-818382 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-818382 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.149582839s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-818382 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-818382 -n no-preload-818382
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-818382 -n no-preload-818382: exit status 3 (3.061977207s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:32:43.504924   69132 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	E0717 01:32:43.504940   69132 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-818382" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
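Both checks in this test go through the same unreachable VM: the status assertion only looks at the {{.Host}} template field, and the addon enable fails even earlier because the paused-check needs an SSH session to run crictl. When this happens it can help to print the remaining status fields with the same flags already used above; a hedged sketch (Kubelet and Kubeconfig are assumed field names, only Host and APIServer appear in this report):

    out/minikube-linux-amd64 status -p no-preload-818382 \
      --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'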

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-484167 -n embed-certs-484167
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-17 01:42:36.744187447 +0000 UTC m=+5887.548333279
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
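The wait above looks for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, and the 9m deadline expires before any such pod shows up. A quick way to see what the dashboard addon actually created is sketched below; it assumes the kubectl context minikube writes is named after the profile (embed-certs-484167), which is its normal behaviour:

    kubectl --context embed-certs-484167 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
    kubectl --context embed-certs-484167 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard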
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-484167 -n embed-certs-484167
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-484167 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-484167 logs -n 25: (1.209182326s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-261470                              | running-upgrade-261470       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-621535                              | stopped-upgrade-621535       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:19 UTC |
	| start   | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-729236                           | kubernetes-upgrade-729236    | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	| start   | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-249342                              | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-249342             | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-249342                              | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-261470                              | running-upgrade-261470       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	| start   | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:22 UTC |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-484167            | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:21 UTC | 17 Jul 24 01:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-945694  | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC | 17 Jul 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC |                     |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-484167                 | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:23 UTC | 17 Jul 24 01:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC | 17 Jul 24 01:28 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-945694       | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC | 17 Jul 24 01:34 UTC |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC | 17 Jul 24 01:28 UTC |
	| start   | -p no-preload-818382 --memory=2200                     | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC | 17 Jul 24 01:30 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-818382             | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:30 UTC | 17 Jul 24 01:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-818382                                   | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-818382                  | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-818382 --memory=2200                     | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:32 UTC | 17 Jul 24 01:42 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:32:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
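	# (annotation, not emitted by minikube) Given the [IWEF]mmdd hh:mm:ss format documented above, the
	# warning and error lines of a saved copy of this log can be pulled out with a plain grep;
	# last-start.log is a hypothetical file name used only for illustration:
	grep -E '^[WE][0-9]{4} ' last-start.log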
	I0717 01:32:43.547613   69161 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:32:43.547856   69161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:32:43.547865   69161 out.go:304] Setting ErrFile to fd 2...
	I0717 01:32:43.547869   69161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:32:43.548058   69161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:32:43.548591   69161 out.go:298] Setting JSON to false
	I0717 01:32:43.549476   69161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8113,"bootTime":1721171851,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:32:43.549531   69161 start.go:139] virtualization: kvm guest
	I0717 01:32:43.551667   69161 out.go:177] * [no-preload-818382] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:32:43.552978   69161 notify.go:220] Checking for updates...
	I0717 01:32:43.553027   69161 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:32:43.554498   69161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:32:43.555767   69161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:32:43.557080   69161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:32:43.558402   69161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:32:43.559566   69161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:32:43.561137   69161 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:32:43.561542   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:32:43.561591   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:43.576810   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0717 01:32:43.577217   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:43.577724   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:32:43.577746   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:43.578068   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:43.578246   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.578474   69161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:32:43.578722   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:32:43.578751   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:43.593634   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0717 01:32:43.594007   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:43.594435   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:32:43.594460   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:43.594810   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:43.594984   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.632126   69161 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:32:43.633290   69161 start.go:297] selected driver: kvm2
	I0717 01:32:43.633305   69161 start.go:901] validating driver "kvm2" against &{Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:32:43.633393   69161 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:32:43.634018   69161 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.634085   69161 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:32:43.648838   69161 install.go:137] /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:32:43.649342   69161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:32:43.649377   69161 cni.go:84] Creating CNI manager for ""
	I0717 01:32:43.649388   69161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:32:43.649454   69161 start.go:340] cluster config:
	{Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:32:43.649575   69161 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.651213   69161 out.go:177] * Starting "no-preload-818382" primary control-plane node in "no-preload-818382" cluster
	I0717 01:32:43.652698   69161 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:32:43.652866   69161 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/config.json ...
	I0717 01:32:43.652971   69161 cache.go:107] acquiring lock: {Name:mk0dda4d4cdd92722b746ab931e6544cfc8daee5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.652980   69161 cache.go:107] acquiring lock: {Name:mk1de3a52aa61e3b4e847379240ac3935bedb199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653004   69161 cache.go:107] acquiring lock: {Name:mkf6e5b69e84ed3f384772a188b9364b7e3d5b5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653072   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 01:32:43.653091   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0717 01:32:43.653102   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0717 01:32:43.653107   69161 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 146.502µs
	I0717 01:32:43.653119   69161 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653117   69161 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 121.37µs
	I0717 01:32:43.653137   69161 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653098   69161 cache.go:107] acquiring lock: {Name:mkf2f11535addf893c2faa84c376231e8d922e64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653127   69161 cache.go:107] acquiring lock: {Name:mk0f717937d10c133c40dfa3d731090d6e186c8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653157   69161 cache.go:107] acquiring lock: {Name:mkddaaee919763be73bfba0c581555b8cc97a67b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653143   69161 cache.go:107] acquiring lock: {Name:mkecaf352dd381368806d2a149fd31f0c349a680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653184   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 exists
	I0717 01:32:43.653170   69161 start.go:360] acquireMachinesLock for no-preload-818382: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:32:43.653201   69161 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0" took 76.404µs
	I0717 01:32:43.653211   69161 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0717 01:32:43.653256   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0717 01:32:43.653259   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0717 01:32:43.653270   69161 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 131.092µs
	I0717 01:32:43.653278   69161 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653278   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0717 01:32:43.653273   69161 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 220.448µs
	I0717 01:32:43.653293   69161 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0717 01:32:43.653292   69161 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 138.342µs
	I0717 01:32:43.653303   69161 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0717 01:32:43.653142   69161 cache.go:107] acquiring lock: {Name:mk2ca5e82f37242a4f02d1776db6559bdb43421e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653316   69161 start.go:364] duration metric: took 84.706µs to acquireMachinesLock for "no-preload-818382"
	I0717 01:32:43.653101   69161 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 132.422µs
	I0717 01:32:43.653358   69161 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:32:43.653360   69161 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 01:32:43.653365   69161 fix.go:54] fixHost starting: 
	I0717 01:32:43.653345   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0717 01:32:43.653380   69161 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 247.182µs
	I0717 01:32:43.653397   69161 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653413   69161 cache.go:87] Successfully saved all images to host disk.
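	# (annotation, not emitted by minikube) The "cache image ... exists" lines above mean this no-preload
	# start reuses image tarballs already saved under the test's MINIKUBE_HOME instead of pulling them.
	# A minimal check of that cache, using only directories named in the log:
	ls -lh /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/
	ls -lh /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/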
	I0717 01:32:43.653791   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:32:43.653851   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:43.669140   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0717 01:32:43.669544   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:43.669975   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:32:43.669995   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:43.670285   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:43.670451   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.670597   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:32:43.672083   69161 fix.go:112] recreateIfNeeded on no-preload-818382: state=Running err=<nil>
	W0717 01:32:43.672118   69161 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:32:43.674037   69161 out.go:177] * Updating the running kvm2 "no-preload-818382" VM ...
	I0717 01:32:40.312635   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:42.810125   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:44.006444   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:46.006933   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:43.675220   69161 machine.go:94] provisionDockerMachine start ...
	I0717 01:32:43.675236   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.675410   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:32:43.677780   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:32:43.678159   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:29:11 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:32:43.678194   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:32:43.678285   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:32:43.678480   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:32:43.678635   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:32:43.678751   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:32:43.678900   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:32:43.679072   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:32:43.679082   69161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:32:46.576890   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:44.811604   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:47.310107   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:49.310610   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:48.007526   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:50.506280   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:49.648813   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:51.310765   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:53.810052   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:53.007282   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:55.506679   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:57.506743   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:55.728954   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:55.810343   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:57.810539   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:00.007367   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:02.509717   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:58.800813   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:59.810958   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:02.310473   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:02.804718   66659 pod_ready.go:81] duration metric: took 4m0.000441849s for pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:02.804758   66659 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 01:33:02.804776   66659 pod_ready.go:38] duration metric: took 4m11.542416864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:02.804800   66659 kubeadm.go:597] duration metric: took 4m19.055059195s to restartPrimaryControlPlane
	W0717 01:33:02.804851   66659 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 01:33:02.804875   66659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 01:33:05.008344   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:07.008631   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:04.880862   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:07.956811   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:09.506709   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:12.007454   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:14.007849   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:16.506348   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:17.072888   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:19.005817   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:21.006641   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:20.144862   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:23.007827   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:24.506621   66178 pod_ready.go:81] duration metric: took 4m0.006337956s for pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:24.506648   66178 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 01:33:24.506656   66178 pod_ready.go:38] duration metric: took 4m4.541684979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
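	# (annotation, not emitted by minikube) Both start processes interleaved above give up after ~4m waiting
	# for a metrics-server pod to report Ready; the addon was enabled with --registries=MetricsServer=fake.domain
	# (see the Audit table), so a failing image pull is the likely cause. A hedged check, using
	# embed-certs-484167 purely as an illustrative kubectl context because the log does not map PID 66178
	# to a profile; the pod name comes from the log:
	kubectl --context embed-certs-484167 -n kube-system get pod metrics-server-569cc877fc-2qwf6 -o wide
	kubectl --context embed-certs-484167 -n kube-system describe pod metrics-server-569cc877fc-2qwf6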
	I0717 01:33:24.506672   66178 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:33:24.506700   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:33:24.506752   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:33:24.553972   66178 cri.go:89] found id: "d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:24.553994   66178 cri.go:89] found id: ""
	I0717 01:33:24.554003   66178 logs.go:276] 1 containers: [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026]
	I0717 01:33:24.554067   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.558329   66178 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:33:24.558382   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:33:24.593681   66178 cri.go:89] found id: "980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:24.593710   66178 cri.go:89] found id: ""
	I0717 01:33:24.593717   66178 logs.go:276] 1 containers: [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c]
	I0717 01:33:24.593764   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.598462   66178 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:33:24.598521   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:33:24.638597   66178 cri.go:89] found id: "370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:24.638617   66178 cri.go:89] found id: ""
	I0717 01:33:24.638624   66178 logs.go:276] 1 containers: [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187]
	I0717 01:33:24.638674   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.642611   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:33:24.642674   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:33:24.678207   66178 cri.go:89] found id: "98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:24.678227   66178 cri.go:89] found id: ""
	I0717 01:33:24.678233   66178 logs.go:276] 1 containers: [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802]
	I0717 01:33:24.678284   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.682820   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:33:24.682884   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:33:24.724141   66178 cri.go:89] found id: "2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:24.724170   66178 cri.go:89] found id: ""
	I0717 01:33:24.724179   66178 logs.go:276] 1 containers: [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364]
	I0717 01:33:24.724231   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.729301   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:33:24.729355   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:33:24.765894   66178 cri.go:89] found id: "b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:24.765916   66178 cri.go:89] found id: ""
	I0717 01:33:24.765925   66178 logs.go:276] 1 containers: [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c]
	I0717 01:33:24.765970   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.770898   66178 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:33:24.770951   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:33:24.805812   66178 cri.go:89] found id: ""
	I0717 01:33:24.805835   66178 logs.go:276] 0 containers: []
	W0717 01:33:24.805842   66178 logs.go:278] No container was found matching "kindnet"
	I0717 01:33:24.805848   66178 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:33:24.805897   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:33:24.847766   66178 cri.go:89] found id: "a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:24.847788   66178 cri.go:89] found id: "dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:24.847794   66178 cri.go:89] found id: ""
	I0717 01:33:24.847802   66178 logs.go:276] 2 containers: [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272]
	I0717 01:33:24.847852   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.852045   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.856136   66178 logs.go:123] Gathering logs for kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] ...
	I0717 01:33:24.856161   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:24.892801   66178 logs.go:123] Gathering logs for kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] ...
	I0717 01:33:24.892829   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:24.944203   66178 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:33:24.944236   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:33:25.482400   66178 logs.go:123] Gathering logs for kubelet ...
	I0717 01:33:25.482440   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:33:25.544150   66178 logs.go:123] Gathering logs for dmesg ...
	I0717 01:33:25.544190   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:33:25.559587   66178 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:33:25.559620   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:33:25.679463   66178 logs.go:123] Gathering logs for kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] ...
	I0717 01:33:25.679488   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:25.725117   66178 logs.go:123] Gathering logs for coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] ...
	I0717 01:33:25.725144   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:25.771390   66178 logs.go:123] Gathering logs for container status ...
	I0717 01:33:25.771417   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:33:25.818766   66178 logs.go:123] Gathering logs for etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] ...
	I0717 01:33:25.818792   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:25.861973   66178 logs.go:123] Gathering logs for kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] ...
	I0717 01:33:25.862008   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:25.899694   66178 logs.go:123] Gathering logs for storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] ...
	I0717 01:33:25.899723   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:25.937573   66178 logs.go:123] Gathering logs for storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] ...
	I0717 01:33:25.937604   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:26.224800   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:28.476050   66178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:33:28.491506   66178 api_server.go:72] duration metric: took 4m14.298590069s to wait for apiserver process to appear ...
	I0717 01:33:28.491527   66178 api_server.go:88] waiting for apiserver healthz status ...
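	# (annotation, not emitted by minikube) While gathering the component logs below, the process keeps
	# polling the apiserver's /healthz endpoint. The same check can be made by hand through the cluster's
	# kubeconfig; embed-certs-484167 is again only an illustrative context name:
	kubectl --context embed-certs-484167 get --raw='/healthz'
	kubectl --context embed-certs-484167 get --raw='/readyz?verbose'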
	I0717 01:33:28.491568   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:33:28.491626   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:33:28.526854   66178 cri.go:89] found id: "d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:28.526882   66178 cri.go:89] found id: ""
	I0717 01:33:28.526891   66178 logs.go:276] 1 containers: [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026]
	I0717 01:33:28.526957   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.531219   66178 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:33:28.531282   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:33:28.567901   66178 cri.go:89] found id: "980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:28.567927   66178 cri.go:89] found id: ""
	I0717 01:33:28.567937   66178 logs.go:276] 1 containers: [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c]
	I0717 01:33:28.567995   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.572030   66178 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:33:28.572094   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:33:28.606586   66178 cri.go:89] found id: "370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:28.606610   66178 cri.go:89] found id: ""
	I0717 01:33:28.606622   66178 logs.go:276] 1 containers: [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187]
	I0717 01:33:28.606679   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.611494   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:33:28.611555   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:33:28.647224   66178 cri.go:89] found id: "98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:28.647247   66178 cri.go:89] found id: ""
	I0717 01:33:28.647255   66178 logs.go:276] 1 containers: [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802]
	I0717 01:33:28.647311   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.651314   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:33:28.651376   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:33:28.686387   66178 cri.go:89] found id: "2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:28.686412   66178 cri.go:89] found id: ""
	I0717 01:33:28.686420   66178 logs.go:276] 1 containers: [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364]
	I0717 01:33:28.686473   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.691061   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:33:28.691128   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:33:28.728066   66178 cri.go:89] found id: "b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:28.728091   66178 cri.go:89] found id: ""
	I0717 01:33:28.728099   66178 logs.go:276] 1 containers: [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c]
	I0717 01:33:28.728147   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.732397   66178 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:33:28.732446   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:33:28.770233   66178 cri.go:89] found id: ""
	I0717 01:33:28.770261   66178 logs.go:276] 0 containers: []
	W0717 01:33:28.770270   66178 logs.go:278] No container was found matching "kindnet"
	I0717 01:33:28.770277   66178 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:33:28.770338   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:33:28.806271   66178 cri.go:89] found id: "a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:28.806296   66178 cri.go:89] found id: "dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:28.806302   66178 cri.go:89] found id: ""
	I0717 01:33:28.806311   66178 logs.go:276] 2 containers: [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272]
	I0717 01:33:28.806371   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.810691   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.814958   66178 logs.go:123] Gathering logs for kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] ...
	I0717 01:33:28.814976   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:28.856685   66178 logs.go:123] Gathering logs for etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] ...
	I0717 01:33:28.856712   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:28.897748   66178 logs.go:123] Gathering logs for kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] ...
	I0717 01:33:28.897790   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:28.958202   66178 logs.go:123] Gathering logs for coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] ...
	I0717 01:33:28.958228   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:28.999474   66178 logs.go:123] Gathering logs for kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] ...
	I0717 01:33:28.999501   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:29.035726   66178 logs.go:123] Gathering logs for kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] ...
	I0717 01:33:29.035758   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:29.072498   66178 logs.go:123] Gathering logs for storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] ...
	I0717 01:33:29.072524   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:29.110199   66178 logs.go:123] Gathering logs for storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] ...
	I0717 01:33:29.110226   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:29.144474   66178 logs.go:123] Gathering logs for kubelet ...
	I0717 01:33:29.144506   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:33:29.196286   66178 logs.go:123] Gathering logs for dmesg ...
	I0717 01:33:29.196315   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:33:29.210251   66178 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:33:29.210274   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:33:29.313845   66178 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:33:29.313877   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:33:29.748683   66178 logs.go:123] Gathering logs for container status ...
	I0717 01:33:29.748719   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:33:32.292005   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:33:32.296375   66178 api_server.go:279] https://192.168.72.48:8443/healthz returned 200:
	ok
	I0717 01:33:32.297480   66178 api_server.go:141] control plane version: v1.30.2
	I0717 01:33:32.297499   66178 api_server.go:131] duration metric: took 3.805966225s to wait for apiserver health ...
	I0717 01:33:32.297507   66178 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:33:32.297528   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:33:32.297569   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:33:32.336526   66178 cri.go:89] found id: "d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:32.336566   66178 cri.go:89] found id: ""
	I0717 01:33:32.336576   66178 logs.go:276] 1 containers: [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026]
	I0717 01:33:32.336629   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.340838   66178 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:33:32.340904   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:33:32.375827   66178 cri.go:89] found id: "980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:32.375853   66178 cri.go:89] found id: ""
	I0717 01:33:32.375862   66178 logs.go:276] 1 containers: [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c]
	I0717 01:33:32.375920   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.380212   66178 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:33:32.380269   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:33:32.417036   66178 cri.go:89] found id: "370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:32.417063   66178 cri.go:89] found id: ""
	I0717 01:33:32.417075   66178 logs.go:276] 1 containers: [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187]
	I0717 01:33:32.417140   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.421437   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:33:32.421507   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:33:32.455708   66178 cri.go:89] found id: "98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:32.455732   66178 cri.go:89] found id: ""
	I0717 01:33:32.455741   66178 logs.go:276] 1 containers: [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802]
	I0717 01:33:32.455799   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.464218   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:33:32.464299   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:33:32.506931   66178 cri.go:89] found id: "2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:32.506958   66178 cri.go:89] found id: ""
	I0717 01:33:32.506968   66178 logs.go:276] 1 containers: [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364]
	I0717 01:33:32.507030   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.511493   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:33:32.511562   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:33:32.554706   66178 cri.go:89] found id: "b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:32.554731   66178 cri.go:89] found id: ""
	I0717 01:33:32.554741   66178 logs.go:276] 1 containers: [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c]
	I0717 01:33:32.554806   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.559101   66178 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:33:32.559175   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:33:32.598078   66178 cri.go:89] found id: ""
	I0717 01:33:32.598113   66178 logs.go:276] 0 containers: []
	W0717 01:33:32.598126   66178 logs.go:278] No container was found matching "kindnet"
	I0717 01:33:32.598135   66178 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:33:32.598209   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:33:29.300812   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:34.426424   66659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.621528106s)
	I0717 01:33:34.426506   66659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:33:34.441446   66659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:33:34.451230   66659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:33:34.460682   66659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:33:34.460702   66659 kubeadm.go:157] found existing configuration files:
	
	I0717 01:33:34.460746   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 01:33:34.469447   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:33:34.469496   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:33:34.478412   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 01:33:34.487047   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:33:34.487096   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:33:34.496243   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 01:33:34.504852   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:33:34.504907   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:33:34.513592   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 01:33:34.521997   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:33:34.522048   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:33:34.530773   66659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:33:32.639086   66178 cri.go:89] found id: "a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:32.639113   66178 cri.go:89] found id: "dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:32.639119   66178 cri.go:89] found id: ""
	I0717 01:33:32.639127   66178 logs.go:276] 2 containers: [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272]
	I0717 01:33:32.639185   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.643404   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.648144   66178 logs.go:123] Gathering logs for kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] ...
	I0717 01:33:32.648165   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:32.700179   66178 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:33:32.700212   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:33:33.091798   66178 logs.go:123] Gathering logs for container status ...
	I0717 01:33:33.091840   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:33:33.142057   66178 logs.go:123] Gathering logs for kubelet ...
	I0717 01:33:33.142095   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:33:33.197532   66178 logs.go:123] Gathering logs for kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] ...
	I0717 01:33:33.197567   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:33.248356   66178 logs.go:123] Gathering logs for etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] ...
	I0717 01:33:33.248393   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:33.290624   66178 logs.go:123] Gathering logs for coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] ...
	I0717 01:33:33.290652   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:33.338525   66178 logs.go:123] Gathering logs for kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] ...
	I0717 01:33:33.338557   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:33.379963   66178 logs.go:123] Gathering logs for dmesg ...
	I0717 01:33:33.379998   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:33:33.393448   66178 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:33:33.393472   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:33:33.497330   66178 logs.go:123] Gathering logs for kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] ...
	I0717 01:33:33.497366   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:33.534015   66178 logs.go:123] Gathering logs for storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] ...
	I0717 01:33:33.534048   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:33.569753   66178 logs.go:123] Gathering logs for storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] ...
	I0717 01:33:33.569779   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:36.112668   66178 system_pods.go:59] 8 kube-system pods found
	I0717 01:33:36.112698   66178 system_pods.go:61] "coredns-7db6d8ff4d-z4qpz" [43aa103c-9e70-4fb1-8607-321b6904a218] Running
	I0717 01:33:36.112704   66178 system_pods.go:61] "etcd-embed-certs-484167" [55918032-05ab-4a5b-951c-c8d4a063751e] Running
	I0717 01:33:36.112710   66178 system_pods.go:61] "kube-apiserver-embed-certs-484167" [39facb47-77a1-4eb7-9c7e-795b35adb238] Running
	I0717 01:33:36.112716   66178 system_pods.go:61] "kube-controller-manager-embed-certs-484167" [270c8cb6-2fdd-4cec-9692-ecc2950ce3b2] Running
	I0717 01:33:36.112721   66178 system_pods.go:61] "kube-proxy-gq7qg" [ac9a0ae4-28e0-4900-a39b-f7a0eba7cc06] Running
	I0717 01:33:36.112726   66178 system_pods.go:61] "kube-scheduler-embed-certs-484167" [e9ea6022-e399-42a3-b8c9-a09a57aa8126] Running
	I0717 01:33:36.112734   66178 system_pods.go:61] "metrics-server-569cc877fc-2qwf6" [caefc20d-d993-46cb-b815-e4ae30ce4e85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:33:36.112741   66178 system_pods.go:61] "storage-provisioner" [620df9ee-45a9-4b04-a21c-0ddc878375ca] Running
	I0717 01:33:36.112752   66178 system_pods.go:74] duration metric: took 3.81523968s to wait for pod list to return data ...
	I0717 01:33:36.112760   66178 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:33:36.114860   66178 default_sa.go:45] found service account: "default"
	I0717 01:33:36.114880   66178 default_sa.go:55] duration metric: took 2.115012ms for default service account to be created ...
	I0717 01:33:36.114888   66178 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:33:36.119333   66178 system_pods.go:86] 8 kube-system pods found
	I0717 01:33:36.119357   66178 system_pods.go:89] "coredns-7db6d8ff4d-z4qpz" [43aa103c-9e70-4fb1-8607-321b6904a218] Running
	I0717 01:33:36.119363   66178 system_pods.go:89] "etcd-embed-certs-484167" [55918032-05ab-4a5b-951c-c8d4a063751e] Running
	I0717 01:33:36.119368   66178 system_pods.go:89] "kube-apiserver-embed-certs-484167" [39facb47-77a1-4eb7-9c7e-795b35adb238] Running
	I0717 01:33:36.119372   66178 system_pods.go:89] "kube-controller-manager-embed-certs-484167" [270c8cb6-2fdd-4cec-9692-ecc2950ce3b2] Running
	I0717 01:33:36.119376   66178 system_pods.go:89] "kube-proxy-gq7qg" [ac9a0ae4-28e0-4900-a39b-f7a0eba7cc06] Running
	I0717 01:33:36.119382   66178 system_pods.go:89] "kube-scheduler-embed-certs-484167" [e9ea6022-e399-42a3-b8c9-a09a57aa8126] Running
	I0717 01:33:36.119392   66178 system_pods.go:89] "metrics-server-569cc877fc-2qwf6" [caefc20d-d993-46cb-b815-e4ae30ce4e85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:33:36.119401   66178 system_pods.go:89] "storage-provisioner" [620df9ee-45a9-4b04-a21c-0ddc878375ca] Running
	I0717 01:33:36.119410   66178 system_pods.go:126] duration metric: took 4.516516ms to wait for k8s-apps to be running ...
	I0717 01:33:36.119423   66178 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:33:36.119469   66178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:33:36.135747   66178 system_svc.go:56] duration metric: took 16.316004ms WaitForService to wait for kubelet
	I0717 01:33:36.135778   66178 kubeadm.go:582] duration metric: took 4m21.94286469s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:33:36.135806   66178 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:33:36.140253   66178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:33:36.140274   66178 node_conditions.go:123] node cpu capacity is 2
	I0717 01:33:36.140285   66178 node_conditions.go:105] duration metric: took 4.473888ms to run NodePressure ...
	I0717 01:33:36.140296   66178 start.go:241] waiting for startup goroutines ...
	I0717 01:33:36.140306   66178 start.go:246] waiting for cluster config update ...
	I0717 01:33:36.140326   66178 start.go:255] writing updated cluster config ...
	I0717 01:33:36.140642   66178 ssh_runner.go:195] Run: rm -f paused
	I0717 01:33:36.188858   66178 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:33:36.191016   66178 out.go:177] * Done! kubectl is now configured to use "embed-certs-484167" cluster and "default" namespace by default
	I0717 01:33:35.376822   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:38.448812   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:34.720645   66659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:33:43.308866   66659 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 01:33:43.308943   66659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:33:43.309108   66659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:33:43.309260   66659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:33:43.309392   66659 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:33:43.309485   66659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:33:43.311060   66659 out.go:204]   - Generating certificates and keys ...
	I0717 01:33:43.311120   66659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:33:43.311229   66659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:33:43.311320   66659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 01:33:43.311396   66659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 01:33:43.311505   66659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 01:33:43.311595   66659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 01:33:43.311682   66659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 01:33:43.311746   66659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 01:33:43.311807   66659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 01:33:43.311893   66659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 01:33:43.311960   66659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 01:33:43.312019   66659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:33:43.312083   66659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:33:43.312165   66659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 01:33:43.312247   66659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:33:43.312337   66659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:33:43.312395   66659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:33:43.312479   66659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:33:43.312534   66659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:33:43.313917   66659 out.go:204]   - Booting up control plane ...
	I0717 01:33:43.313994   66659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:33:43.314085   66659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:33:43.314183   66659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:33:43.314304   66659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:33:43.314415   66659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:33:43.314471   66659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:33:43.314608   66659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 01:33:43.314728   66659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 01:33:43.314817   66659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00137795s
	I0717 01:33:43.314955   66659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 01:33:43.315048   66659 kubeadm.go:310] [api-check] The API server is healthy after 5.002451289s
	I0717 01:33:43.315206   66659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 01:33:43.315310   66659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 01:33:43.315364   66659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 01:33:43.315550   66659 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-945694 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 01:33:43.315640   66659 kubeadm.go:310] [bootstrap-token] Using token: eqtrsf.jetqj440l3wkhk98
	I0717 01:33:43.317933   66659 out.go:204]   - Configuring RBAC rules ...
	I0717 01:33:43.318050   66659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 01:33:43.318148   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 01:33:43.318293   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 01:33:43.318405   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 01:33:43.318513   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 01:33:43.318599   66659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 01:33:43.318755   66659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 01:33:43.318831   66659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 01:33:43.318883   66659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 01:33:43.318890   66659 kubeadm.go:310] 
	I0717 01:33:43.318937   66659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 01:33:43.318945   66659 kubeadm.go:310] 
	I0717 01:33:43.319058   66659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 01:33:43.319068   66659 kubeadm.go:310] 
	I0717 01:33:43.319102   66659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 01:33:43.319189   66659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 01:33:43.319251   66659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 01:33:43.319257   66659 kubeadm.go:310] 
	I0717 01:33:43.319333   66659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 01:33:43.319343   66659 kubeadm.go:310] 
	I0717 01:33:43.319407   66659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 01:33:43.319416   66659 kubeadm.go:310] 
	I0717 01:33:43.319485   66659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 01:33:43.319607   66659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 01:33:43.319690   66659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 01:33:43.319698   66659 kubeadm.go:310] 
	I0717 01:33:43.319797   66659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 01:33:43.319910   66659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 01:33:43.319925   66659 kubeadm.go:310] 
	I0717 01:33:43.320045   66659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token eqtrsf.jetqj440l3wkhk98 \
	I0717 01:33:43.320187   66659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 \
	I0717 01:33:43.320232   66659 kubeadm.go:310] 	--control-plane 
	I0717 01:33:43.320239   66659 kubeadm.go:310] 
	I0717 01:33:43.320349   66659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 01:33:43.320359   66659 kubeadm.go:310] 
	I0717 01:33:43.320469   66659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token eqtrsf.jetqj440l3wkhk98 \
	I0717 01:33:43.320642   66659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 
	I0717 01:33:43.320672   66659 cni.go:84] Creating CNI manager for ""
	I0717 01:33:43.320685   66659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:33:43.322373   66659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:33:43.323549   66659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:33:43.336069   66659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:33:43.354981   66659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:33:43.355060   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:43.355068   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-945694 minikube.k8s.io/updated_at=2024_07_17T01_33_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=default-k8s-diff-port-945694 minikube.k8s.io/primary=true
	I0717 01:33:43.564470   66659 ops.go:34] apiserver oom_adj: -16
	I0717 01:33:43.564611   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:44.065352   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:44.528766   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:47.604799   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:44.565059   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:45.065658   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:45.565085   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:46.064718   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:46.564689   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:47.064998   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:47.564664   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:48.064694   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:48.565187   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:49.065439   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:49.564950   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:50.065001   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:50.565505   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:51.065369   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:51.564969   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:52.065293   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:52.564953   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:53.065324   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:53.565120   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:54.065189   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:54.565611   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:55.065105   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:55.565494   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:56.065453   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:56.565393   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:56.656280   66659 kubeadm.go:1113] duration metric: took 13.301288619s to wait for elevateKubeSystemPrivileges
	I0717 01:33:56.656319   66659 kubeadm.go:394] duration metric: took 5m12.994113939s to StartCluster
	I0717 01:33:56.656341   66659 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:33:56.656429   66659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:33:56.658062   66659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:33:56.658318   66659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:33:56.658384   66659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:33:56.658471   66659 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-945694"
	I0717 01:33:56.658506   66659 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-945694"
	W0717 01:33:56.658516   66659 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:33:56.658514   66659 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-945694"
	I0717 01:33:56.658545   66659 host.go:66] Checking if "default-k8s-diff-port-945694" exists ...
	I0717 01:33:56.658544   66659 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-945694"
	I0717 01:33:56.658565   66659 config.go:182] Loaded profile config "default-k8s-diff-port-945694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:33:56.658566   66659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-945694"
	I0717 01:33:56.658590   66659 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-945694"
	W0717 01:33:56.658603   66659 addons.go:243] addon metrics-server should already be in state true
	I0717 01:33:56.658631   66659 host.go:66] Checking if "default-k8s-diff-port-945694" exists ...
	I0717 01:33:56.658840   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.658867   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.658941   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.658967   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.658946   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.659047   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.660042   66659 out.go:177] * Verifying Kubernetes components...
	I0717 01:33:56.661365   66659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:33:56.675427   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0717 01:33:56.675919   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.676434   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.676455   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.676887   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.677764   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.677807   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.678856   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44785
	I0717 01:33:56.679033   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0717 01:33:56.679281   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.679550   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.680055   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.680079   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.680153   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.680173   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.680443   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.680523   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.680711   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.681210   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.681252   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.684317   66659 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-945694"
	W0717 01:33:56.684338   66659 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:33:56.684362   66659 host.go:66] Checking if "default-k8s-diff-port-945694" exists ...
	I0717 01:33:56.684670   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.684706   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.693393   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0717 01:33:56.693836   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.694292   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.694309   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.694640   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.694801   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.696212   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .DriverName
	I0717 01:33:56.698217   66659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:33:56.699432   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:33:56.699455   66659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:33:56.699472   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHHostname
	I0717 01:33:56.700565   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I0717 01:33:56.701036   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.701563   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.701578   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.701920   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.702150   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.702903   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.703250   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:3e:63", ip: ""} in network mk-default-k8s-diff-port-945694: {Iface:virbr2 ExpiryTime:2024-07-17 02:28:27 +0000 UTC Type:0 Mac:52:54:00:c9:3e:63 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-945694 Clientid:01:52:54:00:c9:3e:63}
	I0717 01:33:56.703275   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined IP address 192.168.50.30 and MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.703457   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHPort
	I0717 01:33:56.703732   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .DriverName
	I0717 01:33:56.703951   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHKeyPath
	I0717 01:33:56.704282   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHUsername
	I0717 01:33:56.704422   66659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/default-k8s-diff-port-945694/id_rsa Username:docker}
	I0717 01:33:56.705576   66659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:33:56.707192   66659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:33:56.707207   66659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:33:56.707219   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHHostname
	I0717 01:33:56.707551   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0717 01:33:56.708045   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.708589   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.708611   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.708957   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.709503   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.709545   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.710201   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.710818   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:3e:63", ip: ""} in network mk-default-k8s-diff-port-945694: {Iface:virbr2 ExpiryTime:2024-07-17 02:28:27 +0000 UTC Type:0 Mac:52:54:00:c9:3e:63 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-945694 Clientid:01:52:54:00:c9:3e:63}
	I0717 01:33:56.710854   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined IP address 192.168.50.30 and MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.711103   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHPort
	I0717 01:33:56.711476   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHKeyPath
	I0717 01:33:56.711751   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHUsername
	I0717 01:33:56.711938   66659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/default-k8s-diff-port-945694/id_rsa Username:docker}
	I0717 01:33:56.724041   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44045
	I0717 01:33:56.724450   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.724943   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.724965   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.725264   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.725481   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.727357   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .DriverName
	I0717 01:33:56.727567   66659 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:33:56.727579   66659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:33:56.727592   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHHostname
	I0717 01:33:56.730575   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.730916   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:3e:63", ip: ""} in network mk-default-k8s-diff-port-945694: {Iface:virbr2 ExpiryTime:2024-07-17 02:28:27 +0000 UTC Type:0 Mac:52:54:00:c9:3e:63 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-945694 Clientid:01:52:54:00:c9:3e:63}
	I0717 01:33:56.730930   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined IP address 192.168.50.30 and MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.731147   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHPort
	I0717 01:33:56.731295   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHKeyPath
	I0717 01:33:56.731414   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHUsername
	I0717 01:33:56.731558   66659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/default-k8s-diff-port-945694/id_rsa Username:docker}
	I0717 01:33:56.880324   66659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:33:56.907224   66659 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-945694" to be "Ready" ...
	I0717 01:33:56.916791   66659 node_ready.go:49] node "default-k8s-diff-port-945694" has status "Ready":"True"
	I0717 01:33:56.916814   66659 node_ready.go:38] duration metric: took 9.553813ms for node "default-k8s-diff-port-945694" to be "Ready" ...
	I0717 01:33:56.916825   66659 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:56.929744   66659 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jbsq5" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:56.991132   66659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:33:57.020549   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:33:57.020582   66659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:33:57.041856   66659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:33:57.095649   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:33:57.095672   66659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:33:57.145707   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:33:57.145737   66659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:33:57.220983   66659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:33:57.569863   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.569888   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.569966   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.569995   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.570184   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.570210   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.570221   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.570221   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.570255   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.570230   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.570274   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.570289   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.570314   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.570325   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.570476   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.570508   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.570514   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.572038   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.572054   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.572095   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.584086   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.584114   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.584383   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.584402   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.951559   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.951583   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.952039   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.952039   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.952055   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.952068   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.952076   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.952317   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.952328   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.952338   66659 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-945694"
	I0717 01:33:57.954803   66659 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:33:53.680800   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:56.752809   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:57.956002   66659 addons.go:510] duration metric: took 1.29761252s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
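Editor's note: the lines above show minikube's addon flow end to end: each manifest is copied into /etc/kubernetes/addons inside the guest (the ssh_runner.go:362 scp lines) and then applied with the kubectl binary and kubeconfig that live inside the VM (the ssh_runner.go:195 kubectl apply lines). The sketch below is a minimal stand-in for that pair of steps driven from the host with plain scp/ssh, using the kubectl path and kubeconfig printed in the log; the applyAddon helper name and the copy-via-/tmp-then-sudo-mv step are my own simplifications, not minikube's actual runner.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAddon mirrors the scp + "kubectl apply" pair seen in the log:
    // copy a manifest into the guest, move it into /etc/kubernetes/addons,
    // then apply it with the in-VM kubectl against the in-VM kubeconfig.
    func applyAddon(ip, keyPath, localManifest, remoteName string) error {
    	tmp := "/tmp/" + remoteName
    	dst := "/etc/kubernetes/addons/" + remoteName
    	scp := exec.Command("scp", "-i", keyPath, "-o", "StrictHostKeyChecking=no",
    		localManifest, fmt.Sprintf("docker@%s:%s", ip, tmp))
    	if out, err := scp.CombinedOutput(); err != nil {
    		return fmt.Errorf("scp: %v: %s", err, out)
    	}
    	cmd := fmt.Sprintf("sudo mkdir -p /etc/kubernetes/addons && sudo mv %s %s && "+
    		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
    		"/var/lib/minikube/binaries/v1.30.2/kubectl apply -f %s", tmp, dst, dst)
    	apply := exec.Command("ssh", "-i", keyPath, "-o", "StrictHostKeyChecking=no",
    		"docker@"+ip, cmd)
    	if out, err := apply.CombinedOutput(); err != nil {
    		return fmt.Errorf("kubectl apply: %v: %s", err, out)
    	}
    	return nil
    }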
	I0717 01:33:58.936404   66659 pod_ready.go:92] pod "coredns-7db6d8ff4d-jbsq5" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.936430   66659 pod_ready.go:81] duration metric: took 2.006657028s for pod "coredns-7db6d8ff4d-jbsq5" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.936440   66659 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mqjqg" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.940948   66659 pod_ready.go:92] pod "coredns-7db6d8ff4d-mqjqg" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.940968   66659 pod_ready.go:81] duration metric: took 4.522302ms for pod "coredns-7db6d8ff4d-mqjqg" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.940976   66659 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.944815   66659 pod_ready.go:92] pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.944830   66659 pod_ready.go:81] duration metric: took 3.847888ms for pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.944838   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.949022   66659 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.949039   66659 pod_ready.go:81] duration metric: took 4.196556ms for pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.949049   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.953438   66659 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.953456   66659 pod_ready.go:81] duration metric: took 4.401091ms for pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.953467   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55xmv" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.335149   66659 pod_ready.go:92] pod "kube-proxy-55xmv" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:59.335174   66659 pod_ready.go:81] duration metric: took 381.700119ms for pod "kube-proxy-55xmv" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.335187   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.734445   66659 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:59.734473   66659 pod_ready.go:81] duration metric: took 399.276861ms for pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.734483   66659 pod_ready.go:38] duration metric: took 2.817646454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
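Editor's note: the pod_ready.go lines above poll each system-critical pod for the Ready condition with a 6m cap and report the elapsed time per pod. A rough stand-in for that loop is sketched below, shelling out to kubectl's jsonpath output instead of minikube's internal client; the 2s poll interval and the waitPodReady name are my own choices, and the kubeconfig path is only an example.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitPodReady polls the Ready condition of one pod, roughly what the
    // pod_ready.go:78/92 lines above report.
    func waitPodReady(kubeconfig, namespace, pod string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
    			"-n", namespace, "get", "pod", pod,
    			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
    }

    func main() {
    	err := waitPodReady("/path/to/kubeconfig", "kube-system",
    		"coredns-7db6d8ff4d-jbsq5", 6*time.Minute)
    	fmt.Println(err)
    }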
	I0717 01:33:59.734499   66659 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:33:59.734557   66659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:33:59.750547   66659 api_server.go:72] duration metric: took 3.092197547s to wait for apiserver process to appear ...
	I0717 01:33:59.750573   66659 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:33:59.750595   66659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0717 01:33:59.755670   66659 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0717 01:33:59.756553   66659 api_server.go:141] control plane version: v1.30.2
	I0717 01:33:59.756591   66659 api_server.go:131] duration metric: took 6.009468ms to wait for apiserver health ...
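Editor's note: the api_server.go:253/279 lines above are just an HTTPS GET against the apiserver healthz endpoint (port 8444 for this default-k8s-diff-port profile), expecting a 200 with body "ok". A bare-bones probe is sketched below; it skips certificate verification the way a quick health check would, whereas a real client should trust the cluster CA instead.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Probe the endpoint from the log. InsecureSkipVerify is only acceptable
    	// for a throwaway health probe against a test VM.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.50.30:8444/healthz")
    	if err != nil {
    		fmt.Println("healthz:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }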
	I0717 01:33:59.756599   66659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:33:59.938573   66659 system_pods.go:59] 9 kube-system pods found
	I0717 01:33:59.938605   66659 system_pods.go:61] "coredns-7db6d8ff4d-jbsq5" [0a95f33d-19ef-4b2e-a94e-08bbcaff92dc] Running
	I0717 01:33:59.938611   66659 system_pods.go:61] "coredns-7db6d8ff4d-mqjqg" [ca27ce06-d171-4edd-9a1d-11898283f3ac] Running
	I0717 01:33:59.938615   66659 system_pods.go:61] "etcd-default-k8s-diff-port-945694" [213d53e1-92c9-4b8a-b9ff-6b7f12acd149] Running
	I0717 01:33:59.938618   66659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-945694" [b22e53fb-feec-4684-a672-f9c9b326bc36] Running
	I0717 01:33:59.938622   66659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-945694" [dc840bd9-5087-4642-8e84-8392d188e85f] Running
	I0717 01:33:59.938626   66659 system_pods.go:61] "kube-proxy-55xmv" [ee6913d5-3362-4a9f-a159-1f9b1da7380a] Running
	I0717 01:33:59.938631   66659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-945694" [7bfa8bdb-a9af-4e6b-8a11-f9b6791e2647] Running
	I0717 01:33:59.938640   66659 system_pods.go:61] "metrics-server-569cc877fc-4nffv" [ba214ec1-a180-42ec-847e-80464e102765] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:33:59.938646   66659 system_pods.go:61] "storage-provisioner" [3352a0de-41db-4537-b87a-24137084aa7a] Running
	I0717 01:33:59.938657   66659 system_pods.go:74] duration metric: took 182.050448ms to wait for pod list to return data ...
	I0717 01:33:59.938669   66659 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:34:00.133695   66659 default_sa.go:45] found service account: "default"
	I0717 01:34:00.133719   66659 default_sa.go:55] duration metric: took 195.042344ms for default service account to be created ...
	I0717 01:34:00.133729   66659 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:34:00.338087   66659 system_pods.go:86] 9 kube-system pods found
	I0717 01:34:00.338127   66659 system_pods.go:89] "coredns-7db6d8ff4d-jbsq5" [0a95f33d-19ef-4b2e-a94e-08bbcaff92dc] Running
	I0717 01:34:00.338137   66659 system_pods.go:89] "coredns-7db6d8ff4d-mqjqg" [ca27ce06-d171-4edd-9a1d-11898283f3ac] Running
	I0717 01:34:00.338143   66659 system_pods.go:89] "etcd-default-k8s-diff-port-945694" [213d53e1-92c9-4b8a-b9ff-6b7f12acd149] Running
	I0717 01:34:00.338151   66659 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-945694" [b22e53fb-feec-4684-a672-f9c9b326bc36] Running
	I0717 01:34:00.338159   66659 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-945694" [dc840bd9-5087-4642-8e84-8392d188e85f] Running
	I0717 01:34:00.338166   66659 system_pods.go:89] "kube-proxy-55xmv" [ee6913d5-3362-4a9f-a159-1f9b1da7380a] Running
	I0717 01:34:00.338173   66659 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-945694" [7bfa8bdb-a9af-4e6b-8a11-f9b6791e2647] Running
	I0717 01:34:00.338184   66659 system_pods.go:89] "metrics-server-569cc877fc-4nffv" [ba214ec1-a180-42ec-847e-80464e102765] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:34:00.338196   66659 system_pods.go:89] "storage-provisioner" [3352a0de-41db-4537-b87a-24137084aa7a] Running
	I0717 01:34:00.338205   66659 system_pods.go:126] duration metric: took 204.470489ms to wait for k8s-apps to be running ...
	I0717 01:34:00.338218   66659 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:34:00.338274   66659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:34:00.352151   66659 system_svc.go:56] duration metric: took 13.921542ms WaitForService to wait for kubelet
	I0717 01:34:00.352188   66659 kubeadm.go:582] duration metric: took 3.693843091s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:34:00.352213   66659 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:34:00.535457   66659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:34:00.535478   66659 node_conditions.go:123] node cpu capacity is 2
	I0717 01:34:00.535489   66659 node_conditions.go:105] duration metric: took 183.271273ms to run NodePressure ...
	I0717 01:34:00.535500   66659 start.go:241] waiting for startup goroutines ...
	I0717 01:34:00.535506   66659 start.go:246] waiting for cluster config update ...
	I0717 01:34:00.535515   66659 start.go:255] writing updated cluster config ...
	I0717 01:34:00.535731   66659 ssh_runner.go:195] Run: rm -f paused
	I0717 01:34:00.581917   66659 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:34:00.583994   66659 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-945694" cluster and "default" namespace by default
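Editor's note: the start.go:600 line above compares the host kubectl version with the cluster version and reports the minor skew (0 here, so no warning is emitted). A tiny sketch of that comparison on two "major.minor.patch" strings is below; the parsing is deliberately simplified (no pre-release or build metadata handling) and minorSkew is my own helper name.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor components of
    // two "major.minor.patch" version strings, e.g. "1.30.2" vs "1.30.2" -> 0.
    func minorSkew(kubectlVer, clusterVer string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("unexpected version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	a, err := minor(kubectlVer)
    	if err != nil {
    		return 0, err
    	}
    	b, err := minor(clusterVer)
    	if err != nil {
    		return 0, err
    	}
    	if a > b {
    		return a - b, nil
    	}
    	return b - a, nil
    }

    func main() {
    	skew, _ := minorSkew("1.30.2", "1.30.2")
    	fmt.Printf("kubectl: 1.30.2, cluster: 1.30.2 (minor skew: %d)\n", skew)
    }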
	I0717 01:34:02.832840   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:05.904845   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:11.984893   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:15.056813   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:21.136802   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:24.208771   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:30.288821   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:33.360818   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:39.440802   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:42.512824   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:48.592870   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:51.668822   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:57.744791   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:00.816890   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:06.896783   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:09.968897   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:16.048887   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:19.120810   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:25.200832   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:28.272897   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:34.352811   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:37.424805   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:43.504775   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:46.576767   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:52.656845   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:55.728841   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:01.808828   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:04.880828   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:10.964781   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:14.032790   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:20.112803   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:23.184780   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:29.264888   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:32.340810   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:38.416815   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:41.488801   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:47.572801   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:50.640840   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:56.720825   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:59.792797   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:05.876784   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:08.944812   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:15.024792   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:18.096815   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:21.098660   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:37:21.098691   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:21.098996   69161 buildroot.go:166] provisioning hostname "no-preload-818382"
	I0717 01:37:21.099019   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:21.099239   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:21.100820   69161 machine.go:97] duration metric: took 4m37.425586326s to provisionDockerMachine
	I0717 01:37:21.100856   69161 fix.go:56] duration metric: took 4m37.44749197s for fixHost
	I0717 01:37:21.100862   69161 start.go:83] releasing machines lock for "no-preload-818382", held for 4m37.447517491s
	W0717 01:37:21.100875   69161 start.go:714] error starting host: provision: host is not running
	W0717 01:37:21.100944   69161 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 01:37:21.100953   69161 start.go:729] Will try again in 5 seconds ...
	I0717 01:37:26.102733   69161 start.go:360] acquireMachinesLock for no-preload-818382: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:37:26.102820   69161 start.go:364] duration metric: took 53.679µs to acquireMachinesLock for "no-preload-818382"
	I0717 01:37:26.102845   69161 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:37:26.102852   69161 fix.go:54] fixHost starting: 
	I0717 01:37:26.103150   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:37:26.103173   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:37:26.119906   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33241
	I0717 01:37:26.120394   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:37:26.120930   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:37:26.120952   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:37:26.121328   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:37:26.121541   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:26.121680   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:37:26.123050   69161 fix.go:112] recreateIfNeeded on no-preload-818382: state=Stopped err=<nil>
	I0717 01:37:26.123069   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	W0717 01:37:26.123226   69161 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:37:26.125020   69161 out.go:177] * Restarting existing kvm2 VM for "no-preload-818382" ...
	I0717 01:37:26.126273   69161 main.go:141] libmachine: (no-preload-818382) Calling .Start
	I0717 01:37:26.126469   69161 main.go:141] libmachine: (no-preload-818382) Ensuring networks are active...
	I0717 01:37:26.127225   69161 main.go:141] libmachine: (no-preload-818382) Ensuring network default is active
	I0717 01:37:26.127552   69161 main.go:141] libmachine: (no-preload-818382) Ensuring network mk-no-preload-818382 is active
	I0717 01:37:26.127899   69161 main.go:141] libmachine: (no-preload-818382) Getting domain xml...
	I0717 01:37:26.128571   69161 main.go:141] libmachine: (no-preload-818382) Creating domain...
	I0717 01:37:27.345119   69161 main.go:141] libmachine: (no-preload-818382) Waiting to get IP...
	I0717 01:37:27.346205   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:27.346716   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:27.346764   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:27.346681   70303 retry.go:31] will retry after 199.66464ms: waiting for machine to come up
	I0717 01:37:27.548206   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:27.548848   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:27.548873   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:27.548815   70303 retry.go:31] will retry after 280.929524ms: waiting for machine to come up
	I0717 01:37:27.831501   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:27.831934   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:27.831964   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:27.831916   70303 retry.go:31] will retry after 301.466781ms: waiting for machine to come up
	I0717 01:37:28.135465   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:28.135945   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:28.135981   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:28.135907   70303 retry.go:31] will retry after 393.103911ms: waiting for machine to come up
	I0717 01:37:28.530344   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:28.530791   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:28.530815   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:28.530761   70303 retry.go:31] will retry after 518.699896ms: waiting for machine to come up
	I0717 01:37:29.051266   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:29.051722   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:29.051763   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:29.051702   70303 retry.go:31] will retry after 618.253779ms: waiting for machine to come up
	I0717 01:37:29.671578   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:29.672083   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:29.672111   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:29.672032   70303 retry.go:31] will retry after 718.051367ms: waiting for machine to come up
	I0717 01:37:30.391904   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:30.392339   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:30.392367   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:30.392290   70303 retry.go:31] will retry after 1.040644293s: waiting for machine to come up
	I0717 01:37:31.434846   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:31.435419   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:31.435467   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:31.435401   70303 retry.go:31] will retry after 1.802022391s: waiting for machine to come up
	I0717 01:37:33.238798   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:33.239381   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:33.239409   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:33.239333   70303 retry.go:31] will retry after 1.417897015s: waiting for machine to come up
	I0717 01:37:34.658523   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:34.659018   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:34.659046   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:34.658971   70303 retry.go:31] will retry after 2.736057609s: waiting for machine to come up
	I0717 01:37:37.396582   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:37.397249   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:37.397279   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:37.397179   70303 retry.go:31] will retry after 2.2175965s: waiting for machine to come up
	I0717 01:37:39.616404   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:39.616819   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:39.616852   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:39.616775   70303 retry.go:31] will retry after 4.136811081s: waiting for machine to come up
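Editor's note: the retry.go lines above back off with growing delays (≈200ms up to ~4s) while the freshly restarted domain waits for a DHCP lease on mk-no-preload-818382. The sketch below is a rough equivalent that polls libvirt's lease table with `virsh net-dhcp-leases` for the MAC from the log; the backoff constants and the waitForLease helper are my own, and it assumes the local user can reach the default libvirt connection.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForLease polls libvirt's DHCP lease table until the given MAC shows up,
    // doubling the delay between attempts much like the retry ladder above.
    func waitForLease(network, mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
    		if err == nil {
    			for _, line := range strings.Split(string(out), "\n") {
    				if !strings.Contains(line, mac) {
    					continue
    				}
    				// The IP column looks like "192.168.39.38/24".
    				for _, f := range strings.Fields(line) {
    					if strings.Contains(f, "/") && strings.Contains(f, ".") {
    						return strings.SplitN(f, "/", 2)[0], nil
    					}
    				}
    			}
    		}
    		time.Sleep(delay)
    		if delay < 4*time.Second {
    			delay *= 2
    		}
    	}
    	return "", fmt.Errorf("no DHCP lease for %s on %s within %s", mac, network, timeout)
    }

    func main() {
    	ip, err := waitForLease("mk-no-preload-818382", "52:54:00:e4:de:04", 2*time.Minute)
    	fmt.Println(ip, err)
    }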
	I0717 01:37:43.754795   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.755339   69161 main.go:141] libmachine: (no-preload-818382) Found IP for machine: 192.168.39.38
	I0717 01:37:43.755364   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has current primary IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.755370   69161 main.go:141] libmachine: (no-preload-818382) Reserving static IP address...
	I0717 01:37:43.755825   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "no-preload-818382", mac: "52:54:00:e4:de:04", ip: "192.168.39.38"} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.755856   69161 main.go:141] libmachine: (no-preload-818382) Reserved static IP address: 192.168.39.38
	I0717 01:37:43.755870   69161 main.go:141] libmachine: (no-preload-818382) DBG | skip adding static IP to network mk-no-preload-818382 - found existing host DHCP lease matching {name: "no-preload-818382", mac: "52:54:00:e4:de:04", ip: "192.168.39.38"}
	I0717 01:37:43.755885   69161 main.go:141] libmachine: (no-preload-818382) DBG | Getting to WaitForSSH function...
	I0717 01:37:43.755893   69161 main.go:141] libmachine: (no-preload-818382) Waiting for SSH to be available...
	I0717 01:37:43.758007   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.758337   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.758366   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.758581   69161 main.go:141] libmachine: (no-preload-818382) DBG | Using SSH client type: external
	I0717 01:37:43.758615   69161 main.go:141] libmachine: (no-preload-818382) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa (-rw-------)
	I0717 01:37:43.758640   69161 main.go:141] libmachine: (no-preload-818382) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:37:43.758650   69161 main.go:141] libmachine: (no-preload-818382) DBG | About to run SSH command:
	I0717 01:37:43.758662   69161 main.go:141] libmachine: (no-preload-818382) DBG | exit 0
	I0717 01:37:43.884574   69161 main.go:141] libmachine: (no-preload-818382) DBG | SSH cmd err, output: <nil>: 
	I0717 01:37:43.884894   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetConfigRaw
	I0717 01:37:43.885637   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:43.888140   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.888641   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.888673   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.888992   69161 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/config.json ...
	I0717 01:37:43.889212   69161 machine.go:94] provisionDockerMachine start ...
	I0717 01:37:43.889237   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:43.889449   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:43.892095   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.892409   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.892451   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.892636   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:43.892814   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:43.892978   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:43.893129   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:43.893272   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:43.893472   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:43.893487   69161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:37:44.004698   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:37:44.004726   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:44.005009   69161 buildroot.go:166] provisioning hostname "no-preload-818382"
	I0717 01:37:44.005035   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:44.005206   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.008187   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.008700   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.008726   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.008920   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.009094   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.009286   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.009441   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.009612   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:44.009770   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:44.009781   69161 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-818382 && echo "no-preload-818382" | sudo tee /etc/hostname
	I0717 01:37:44.136253   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-818382
	
	I0717 01:37:44.136281   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.138973   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.139255   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.139284   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.139469   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.139643   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.139828   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.140012   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.140288   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:44.140479   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:44.140504   69161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-818382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-818382/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-818382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:37:44.266505   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
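Editor's note: the hostname step above is two small shell commands run over SSH (set the hostname, then make sure /etc/hosts carries a 127.0.1.1 entry for it). The sketch below does the same round-trip with golang.org/x/crypto/ssh, reusing the key path and address from this log; error handling is trimmed and the host key check is skipped, which is only reasonable for a throwaway test VM, and minikube itself goes through its own ssh runner rather than this code.

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.38:22", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// One session per command, mirroring the SSH round-trips in the log.
    	for _, cmd := range []string{
    		`sudo hostname no-preload-818382 && echo "no-preload-818382" | sudo tee /etc/hostname`,
    		`grep -q no-preload-818382 /etc/hosts || echo '127.0.1.1 no-preload-818382' | sudo tee -a /etc/hosts`,
    	} {
    		sess, err := client.NewSession()
    		if err != nil {
    			panic(err)
    		}
    		out, err := sess.CombinedOutput(cmd)
    		sess.Close()
    		fmt.Printf("%s\n%s err=%v\n", cmd, out, err)
    	}
    }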
	I0717 01:37:44.266534   69161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 01:37:44.266551   69161 buildroot.go:174] setting up certificates
	I0717 01:37:44.266562   69161 provision.go:84] configureAuth start
	I0717 01:37:44.266580   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:44.266878   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:44.269798   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.270235   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.270268   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.270404   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.272533   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.272880   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.272907   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.273042   69161 provision.go:143] copyHostCerts
	I0717 01:37:44.273125   69161 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 01:37:44.273144   69161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 01:37:44.273206   69161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 01:37:44.273316   69161 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 01:37:44.273326   69161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 01:37:44.273351   69161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 01:37:44.273410   69161 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 01:37:44.273414   69161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 01:37:44.273433   69161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 01:37:44.273487   69161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.no-preload-818382 san=[127.0.0.1 192.168.39.38 localhost minikube no-preload-818382]
	I0717 01:37:44.479434   69161 provision.go:177] copyRemoteCerts
	I0717 01:37:44.479494   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:37:44.479540   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.482477   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.482908   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.482946   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.483128   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.483327   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.483455   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.483580   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:44.571236   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:37:44.596972   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 01:37:44.621104   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:37:44.643869   69161 provision.go:87] duration metric: took 377.294141ms to configureAuth
	I0717 01:37:44.643898   69161 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:37:44.644105   69161 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:37:44.644180   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.646792   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.647149   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.647179   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.647336   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.647539   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.647675   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.647780   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.647927   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:44.648096   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:44.648110   69161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:37:44.939532   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:37:44.939559   69161 machine.go:97] duration metric: took 1.050331351s to provisionDockerMachine
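Editor's note: the `%!s(MISSING)` fragment in the command above (and the `%!s(MISSING).%!N(MISSING)` in the `date` probe further below) is almost certainly a logging artifact rather than the command that actually ran: the command templates contain literal `%s`/`%N` verbs and are echoed through a Printf-style logger without matching arguments, which Go's fmt renders as missing-operand markers. The two lines below reproduce exactly that behavior.

    package main

    import "fmt"

    func main() {
    	// A verb with no matching argument is rendered as %!verb(MISSING),
    	// the same markers that appear in the log lines around this note.
    	fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s \"...\"\n")
    	fmt.Printf("date +%s.%N\n")
    }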
	I0717 01:37:44.939571   69161 start.go:293] postStartSetup for "no-preload-818382" (driver="kvm2")
	I0717 01:37:44.939587   69161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:37:44.939631   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:44.940024   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:37:44.940056   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.942783   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.943199   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.943225   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.943340   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.943504   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.943643   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.943806   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:45.027519   69161 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:37:45.031577   69161 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:37:45.031599   69161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:37:45.031667   69161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:37:45.031760   69161 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:37:45.031877   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:37:45.041021   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:37:45.064965   69161 start.go:296] duration metric: took 125.382388ms for postStartSetup
	I0717 01:37:45.064998   69161 fix.go:56] duration metric: took 18.96214661s for fixHost
	I0717 01:37:45.065016   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:45.067787   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.068183   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.068217   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.068340   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:45.068582   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.068751   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.068904   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:45.069063   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:45.069226   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:45.069239   69161 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:37:45.181490   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180265.155979386
	
	I0717 01:37:45.181513   69161 fix.go:216] guest clock: 1721180265.155979386
	I0717 01:37:45.181522   69161 fix.go:229] Guest: 2024-07-17 01:37:45.155979386 +0000 UTC Remote: 2024-07-17 01:37:45.065002166 +0000 UTC m=+301.553951222 (delta=90.97722ms)
	I0717 01:37:45.181546   69161 fix.go:200] guest clock delta is within tolerance: 90.97722ms
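Editor's note: the fix.go:216-229 lines above read the guest clock over SSH (the `date +%s.%N` probe), compare it with the host clock, and only resync when the delta exceeds a tolerance; here the 90.97722ms delta is within bounds. A small sketch of that comparison is below; the 1s tolerance is illustrative only (minikube applies its own bound), and clockDelta is my own helper name.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses the "seconds.nanoseconds" string that `date +%s.%N`
    // prints inside the guest and returns how far it is from the local clock.
    func clockDelta(guestOut string) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var ns int64
    	if len(parts) == 2 {
    		if ns, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return 0, err
    		}
    	}
    	d := time.Since(time.Unix(sec, ns))
    	if d < 0 {
    		d = -d
    	}
    	return d, nil
    }

    func main() {
    	// Value taken from the log; the delta is only meaningful when computed
    	// at the moment the guest reported it.
    	delta, err := clockDelta("1721180265.155979386")
    	if err != nil {
    		panic(err)
    	}
    	const tolerance = 1 * time.Second
    	fmt.Printf("guest clock delta %s, within tolerance: %v\n", delta, delta <= tolerance)
    }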
	I0717 01:37:45.181551   69161 start.go:83] releasing machines lock for "no-preload-818382", held for 19.07872127s
	I0717 01:37:45.181570   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.181832   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:45.184836   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.185246   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.185273   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.185420   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.185969   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.186161   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.186303   69161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:37:45.186354   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:45.186440   69161 ssh_runner.go:195] Run: cat /version.json
	I0717 01:37:45.186464   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:45.189106   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189351   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189501   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.189548   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189674   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:45.189876   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.189883   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.189910   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189957   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:45.190062   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:45.190122   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.190251   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:45.190283   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:45.190505   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:45.273517   69161 ssh_runner.go:195] Run: systemctl --version
	I0717 01:37:45.297810   69161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:37:45.444285   69161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:37:45.450949   69161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:37:45.451015   69161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:37:45.469442   69161 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:37:45.469470   69161 start.go:495] detecting cgroup driver to use...
	I0717 01:37:45.469534   69161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:37:45.488907   69161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:37:45.503268   69161 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:37:45.503336   69161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:37:45.516933   69161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:37:45.530525   69161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:37:45.642175   69161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:37:45.802107   69161 docker.go:233] disabling docker service ...
	I0717 01:37:45.802170   69161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:37:45.815967   69161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:37:45.827961   69161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:37:45.948333   69161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:37:46.066388   69161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:37:46.081332   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:37:46.102124   69161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 01:37:46.102209   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.113289   69161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:37:46.113361   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.123902   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.133825   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.143399   69161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:37:46.153336   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.163110   69161 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.179869   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
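The sed one-liners above rewrite CRI-O's drop-in config (/etc/crio/crio.conf.d/02-crio.conf) to pin the pause image and select the cgroupfs cgroup manager. A rough Go sketch of the first two substitutions, assuming the file path from the log; everything else is illustrative and needs root to write the real file:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		fmt.Println("read:", err)
    		return
    	}
    	// Pin the pause image, then the cgroup manager, line by line.
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(conf, out, 0o644); err != nil {
    		fmt.Println("write:", err)
    	}
    }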
	I0717 01:37:46.190114   69161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:37:46.199740   69161 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:37:46.199791   69161 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:37:46.212405   69161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
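The netfilter probe above fails because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, so the run falls back to modprobe and then enables IPv4 forwarding. A simplified sketch of that fallback, assuming standard Linux paths; error handling is reduced for illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(key); err != nil {
    		// Sysctl file absent: load the bridge netfilter module first.
    		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
    			return
    		}
    	}
    	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		fmt.Printf("enabling ip_forward failed (needs root): %v\n", err)
    	}
    }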
	I0717 01:37:46.223444   69161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:37:46.337353   69161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:37:46.486553   69161 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:37:46.486616   69161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:37:46.491747   69161 start.go:563] Will wait 60s for crictl version
	I0717 01:37:46.491820   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:46.495749   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:37:46.537334   69161 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:37:46.537418   69161 ssh_runner.go:195] Run: crio --version
	I0717 01:37:46.566918   69161 ssh_runner.go:195] Run: crio --version
	I0717 01:37:46.598762   69161 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 01:37:46.600041   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:46.602939   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:46.603358   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:46.603387   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:46.603645   69161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:37:46.607975   69161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:37:46.621718   69161 kubeadm.go:883] updating cluster {Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:37:46.621869   69161 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:37:46.621921   69161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:37:46.657321   69161 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 01:37:46.657346   69161 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:37:46.657389   69161 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:46.657417   69161 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:46.657446   69161 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 01:37:46.657480   69161 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.657596   69161 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:46.657645   69161 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:46.657653   69161 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.657733   69161 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.659108   69161 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 01:37:46.659120   69161 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:46.659172   69161 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.659109   69161 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:46.659171   69161 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.659209   69161 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:46.659210   69161 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.659110   69161 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:46.818816   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.824725   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.825088   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.825902   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:46.830336   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:46.842814   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 01:37:46.876989   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:46.906964   69161 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 01:37:46.907012   69161 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.907060   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:46.953522   69161 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 01:37:46.953572   69161 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.953624   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:46.985236   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:46.990623   69161 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 01:37:46.990667   69161 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.990715   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.000280   69161 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 01:37:47.000313   69161 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:47.000354   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.009927   69161 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 01:37:47.009976   69161 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:47.010045   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.124625   69161 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 01:37:47.124677   69161 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:47.124706   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:47.124718   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.124805   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:47.124853   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:47.124877   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:47.124906   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:47.124804   69161 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 01:37:47.124949   69161 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:47.124983   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.231159   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:47.231201   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 01:37:47.231217   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:47.231243   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:47.231263   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:47.231302   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:37:47.231349   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:47.231414   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:47.231570   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:47.231431   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:47.231464   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 01:37:47.231715   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:37:47.279220   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 01:37:47.279239   69161 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:37:47.279286   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:37:47.293132   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 01:37:47.293233   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 01:37:47.293243   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:37:47.293309   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 01:37:47.293313   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 01:37:47.293338   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 01:37:47.293480   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 01:37:47.293582   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:37:51.052908   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.773599434s)
	I0717 01:37:51.052941   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 01:37:51.052963   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:51.052960   69161 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (3.759674708s)
	I0717 01:37:51.052994   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 01:37:51.053016   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:51.053020   69161 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.75941775s)
	I0717 01:37:51.053050   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 01:37:52.809764   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.756726059s)
	I0717 01:37:52.809790   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 01:37:52.809818   69161 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:37:52.809884   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:37:54.565189   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.755280201s)
	I0717 01:37:54.565217   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 01:37:54.565251   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:54.565341   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:56.720406   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.155036511s)
	I0717 01:37:56.720439   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 01:37:56.720473   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:56.720538   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:58.168141   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.447572914s)
	I0717 01:37:58.168181   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 01:37:58.168216   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:37:58.168278   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:38:00.033559   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.865254148s)
	I0717 01:38:00.033590   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 01:38:00.033619   69161 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:38:00.033680   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:38:00.885074   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 01:38:00.885123   69161 cache_images.go:123] Successfully loaded all cached images
	I0717 01:38:00.885131   69161 cache_images.go:92] duration metric: took 14.22776998s to LoadCachedImages
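The cache_images lines above show each missing image tarball being found under /var/lib/minikube/images and loaded into the CRI-O image store one at a time with `sudo podman load -i <tarball>`. A rough Go sketch of that serial load loop; the tarball paths come from the log, the helper name is an illustrative assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // loadCached loads pre-pulled image tarballs via podman, one after
    // another, mirroring the serial loop seen in the log.
    func loadCached(tarballs []string) error {
    	for _, t := range tarballs {
    		cmd := exec.Command("sudo", "podman", "load", "-i", t)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			return fmt.Errorf("podman load %s: %v: %s", t, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	imgs := []string{
    		"/var/lib/minikube/images/etcd_3.5.14-0",
    		"/var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0",
    	}
    	if err := loadCached(imgs); err != nil {
    		fmt.Println(err)
    	}
    }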
	I0717 01:38:00.885149   69161 kubeadm.go:934] updating node { 192.168.39.38 8443 v1.31.0-beta.0 crio true true} ...
	I0717 01:38:00.885276   69161 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-818382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:38:00.885360   69161 ssh_runner.go:195] Run: crio config
	I0717 01:38:00.935613   69161 cni.go:84] Creating CNI manager for ""
	I0717 01:38:00.935637   69161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:38:00.935649   69161 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:38:00.935674   69161 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-818382 NodeName:no-preload-818382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:38:00.935799   69161 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-818382"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:38:00.935866   69161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 01:38:00.946897   69161 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:38:00.946982   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:38:00.956493   69161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0717 01:38:00.974619   69161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 01:38:00.992580   69161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0717 01:38:01.009552   69161 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0717 01:38:01.013704   69161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:38:01.026053   69161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:38:01.150532   69161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:38:01.167166   69161 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382 for IP: 192.168.39.38
	I0717 01:38:01.167196   69161 certs.go:194] generating shared ca certs ...
	I0717 01:38:01.167219   69161 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:01.167398   69161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:38:01.167485   69161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:38:01.167504   69161 certs.go:256] generating profile certs ...
	I0717 01:38:01.167622   69161 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/client.key
	I0717 01:38:01.167740   69161 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/apiserver.key.0a44641a
	I0717 01:38:01.167811   69161 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/proxy-client.key
	I0717 01:38:01.167996   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:38:01.168037   69161 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:38:01.168049   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:38:01.168094   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:38:01.168137   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:38:01.168176   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:38:01.168241   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:38:01.169161   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:38:01.202385   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:38:01.236910   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:38:01.270000   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:38:01.306655   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:38:01.355634   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:38:01.386958   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:38:01.411202   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:38:01.435949   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:38:01.460843   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:38:01.486827   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:38:01.511874   69161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:38:01.529784   69161 ssh_runner.go:195] Run: openssl version
	I0717 01:38:01.535968   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:38:01.547564   69161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:38:01.552546   69161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:38:01.552611   69161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:38:01.558592   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:38:01.569461   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:38:01.580422   69161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:38:01.585228   69161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:38:01.585276   69161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:38:01.591149   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:38:01.602249   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:38:01.614146   69161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:01.618807   69161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:01.618868   69161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:01.624861   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
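The openssl/ln pairs above publish each CA certificate under /etc/ssl/certs by its subject hash (51391683, 3ec20f2e, b5213941 in this run), which is how OpenSSL locates trust anchors. A sketch of that pattern; the helper name is an illustrative assumption and the symlink step needs root:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA symlinks a CA certificate to /etc/ssl/certs/<subject-hash>.0,
    // mirroring the "openssl x509 -hash" plus "ln -fs" pair in the log.
    func installCA(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return fmt.Errorf("openssl x509 -hash: %v", err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"
    	_ = os.Remove(link) // replace an existing link, like ln -fs
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }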
	I0717 01:38:01.635446   69161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:38:01.640287   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:38:01.646102   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:38:01.651967   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:38:01.658169   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:38:01.664359   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:38:01.670597   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
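Each `openssl x509 -checkend 86400` above exits 0 only if the certificate will still be valid 24 hours from now, so a non-zero exit flags an imminent expiry and triggers regeneration. A minimal sketch of that probe; the cert path is taken from the log, the wrapper function is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // expiresWithinADay reports whether a certificate expires within the
    // next 86400 seconds, using the same openssl -checkend probe as the log.
    // A non-zero exit (or any other error) is treated as "needs renewal".
    func expiresWithinADay(cert string) bool {
    	err := exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run()
    	return err != nil
    }

    func main() {
    	fmt.Println(expiresWithinADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }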
	I0717 01:38:01.677288   69161 kubeadm.go:392] StartCluster: {Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:38:01.677378   69161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:38:01.677434   69161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:38:01.718896   69161 cri.go:89] found id: ""
	I0717 01:38:01.718964   69161 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:38:01.730404   69161 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:38:01.730426   69161 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:38:01.730467   69161 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:38:01.742131   69161 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:38:01.743114   69161 kubeconfig.go:125] found "no-preload-818382" server: "https://192.168.39.38:8443"
	I0717 01:38:01.745151   69161 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:38:01.755348   69161 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0717 01:38:01.755379   69161 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:38:01.755393   69161 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:38:01.755441   69161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:38:01.794585   69161 cri.go:89] found id: ""
	I0717 01:38:01.794657   69161 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:38:01.811878   69161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:38:01.822275   69161 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:38:01.822297   69161 kubeadm.go:157] found existing configuration files:
	
	I0717 01:38:01.822349   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:38:01.832295   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:38:01.832361   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:38:01.841853   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:38:01.850743   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:38:01.850792   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:38:01.860061   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:38:01.869640   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:38:01.869695   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:38:01.879146   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:38:01.888664   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:38:01.888730   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:38:01.898051   69161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:38:01.907209   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:02.013763   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.064624   69161 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.050830101s)
	I0717 01:38:03.064658   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.281880   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.360185   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
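The restartPrimaryControlPlane path above regenerates certificates, kubeconfigs, the kubelet bootstrap, the control-plane static pod manifests, and local etcd by invoking individual `kubeadm init phase` subcommands against the freshly copied /var/tmp/minikube/kubeadm.yaml. A condensed driver loop, assuming plain `sudo kubeadm` on PATH rather than minikube's versioned binaries directory; phase names and the config path are taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const cfg = "/var/tmp/minikube/kubeadm.yaml"
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"kubeadm", "init", "phase"}, p...)
    		args = append(args, "--config", cfg)
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			fmt.Printf("%v failed: %v\n%s\n", p, err, out)
    			return
    		}
    	}
    }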
	I0717 01:38:03.475762   69161 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:38:03.475859   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:38:03.976869   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:38:04.476826   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:38:04.513612   69161 api_server.go:72] duration metric: took 1.03785049s to wait for apiserver process to appear ...
	I0717 01:38:04.513637   69161 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:38:04.513658   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:04.514182   69161 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0717 01:38:05.013987   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:07.606646   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:38:07.606681   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:38:07.606698   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:07.644623   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:38:07.644659   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:38:08.014209   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:08.018649   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:38:08.018675   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:38:08.513802   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:08.523658   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:38:08.523683   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:38:09.013997   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:09.018582   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0717 01:38:09.025524   69161 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 01:38:09.025556   69161 api_server.go:131] duration metric: took 4.511910476s to wait for apiserver health ...
	I0717 01:38:09.025567   69161 cni.go:84] Creating CNI manager for ""
	I0717 01:38:09.025576   69161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:38:09.026854   69161 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:38:09.028050   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:38:09.054928   69161 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:38:09.099807   69161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:38:09.110763   69161 system_pods.go:59] 8 kube-system pods found
	I0717 01:38:09.110804   69161 system_pods.go:61] "coredns-5cfdc65f69-rzhfk" [eb91980f-dca7-4dd0-902e-7d1ffac4e1b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:38:09.110817   69161 system_pods.go:61] "etcd-no-preload-818382" [99688a8a-50fc-416b-9c00-23a516eab775] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:38:09.110827   69161 system_pods.go:61] "kube-apiserver-no-preload-818382" [3e08eb95-84f7-4541-a2c3-9a5b9e3365f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:38:09.110835   69161 system_pods.go:61] "kube-controller-manager-no-preload-818382" [d356be23-8cd9-4f72-94e6-354a39f587eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:38:09.110843   69161 system_pods.go:61] "kube-proxy-7xjgl" [79ab1bff-5791-464d-98a0-041c53c47234] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:38:09.110852   69161 system_pods.go:61] "kube-scheduler-no-preload-818382" [e148b48b-ee09-49b4-9600-83c039254f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:38:09.110862   69161 system_pods.go:61] "metrics-server-78fcd8795b-vgkwg" [6386b732-76a6-4744-9215-e4764e08e4e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:38:09.110872   69161 system_pods.go:61] "storage-provisioner" [c5a0695e-6c38-463e-8f96-60c0e60c7132] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 01:38:09.110881   69161 system_pods.go:74] duration metric: took 11.048265ms to wait for pod list to return data ...
	I0717 01:38:09.110895   69161 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:38:09.115164   69161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:38:09.115185   69161 node_conditions.go:123] node cpu capacity is 2
	I0717 01:38:09.115195   69161 node_conditions.go:105] duration metric: took 4.295793ms to run NodePressure ...
	I0717 01:38:09.115222   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:09.380448   69161 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:38:09.385062   69161 kubeadm.go:739] kubelet initialised
	I0717 01:38:09.385081   69161 kubeadm.go:740] duration metric: took 4.609373ms waiting for restarted kubelet to initialise ...
	I0717 01:38:09.385089   69161 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:38:09.390128   69161 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.395089   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.395114   69161 pod_ready.go:81] duration metric: took 4.964286ms for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.395122   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.395130   69161 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.400466   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "etcd-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.400485   69161 pod_ready.go:81] duration metric: took 5.34752ms for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.400494   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "etcd-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.400502   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.406059   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-apiserver-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.406079   69161 pod_ready.go:81] duration metric: took 5.569824ms for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.406087   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-apiserver-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.406094   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.508478   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.508503   69161 pod_ready.go:81] duration metric: took 102.401908ms for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.508513   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.508521   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.903484   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-proxy-7xjgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.903507   69161 pod_ready.go:81] duration metric: took 394.977533ms for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.903516   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-proxy-7xjgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.903522   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:10.303374   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-scheduler-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.303400   69161 pod_ready.go:81] duration metric: took 399.87153ms for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:10.303410   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-scheduler-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.303417   69161 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:10.703844   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.703872   69161 pod_ready.go:81] duration metric: took 400.446731ms for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:10.703882   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.703890   69161 pod_ready.go:38] duration metric: took 1.31879349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:38:10.703906   69161 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:38:10.716314   69161 ops.go:34] apiserver oom_adj: -16
	I0717 01:38:10.716330   69161 kubeadm.go:597] duration metric: took 8.985898425s to restartPrimaryControlPlane
	I0717 01:38:10.716338   69161 kubeadm.go:394] duration metric: took 9.0390568s to StartCluster
	I0717 01:38:10.716357   69161 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:10.716443   69161 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:38:10.718239   69161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:10.718467   69161 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:38:10.718525   69161 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:38:10.718599   69161 addons.go:69] Setting storage-provisioner=true in profile "no-preload-818382"
	I0717 01:38:10.718615   69161 addons.go:69] Setting default-storageclass=true in profile "no-preload-818382"
	I0717 01:38:10.718632   69161 addons.go:234] Setting addon storage-provisioner=true in "no-preload-818382"
	W0717 01:38:10.718641   69161 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:38:10.718657   69161 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-818382"
	I0717 01:38:10.718648   69161 addons.go:69] Setting metrics-server=true in profile "no-preload-818382"
	I0717 01:38:10.718669   69161 host.go:66] Checking if "no-preload-818382" exists ...
	I0717 01:38:10.718684   69161 addons.go:234] Setting addon metrics-server=true in "no-preload-818382"
	W0717 01:38:10.718694   69161 addons.go:243] addon metrics-server should already be in state true
	I0717 01:38:10.718710   69161 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:38:10.718720   69161 host.go:66] Checking if "no-preload-818382" exists ...
	I0717 01:38:10.718995   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.719013   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.719033   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.719036   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.719037   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.719062   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.720225   69161 out.go:177] * Verifying Kubernetes components...
	I0717 01:38:10.721645   69161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:38:10.735669   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I0717 01:38:10.735668   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42639
	I0717 01:38:10.736213   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.736224   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.736697   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.736712   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.736749   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.736761   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.737065   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.737104   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.737517   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37461
	I0717 01:38:10.737604   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.737623   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.737632   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.737643   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.737988   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.738548   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.738575   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.738916   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.739154   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.742601   69161 addons.go:234] Setting addon default-storageclass=true in "no-preload-818382"
	W0717 01:38:10.742621   69161 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:38:10.742649   69161 host.go:66] Checking if "no-preload-818382" exists ...
	I0717 01:38:10.742978   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.743000   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.753050   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40075
	I0717 01:38:10.761069   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.761760   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.761778   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.762198   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.762374   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.764056   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:38:10.766144   69161 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:38:10.767506   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:38:10.767527   69161 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:38:10.767546   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:38:10.770625   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.771141   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:38:10.771169   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.771354   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:38:10.771538   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:38:10.771797   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:38:10.771964   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:38:10.777232   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I0717 01:38:10.777667   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.778207   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.778234   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.778629   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.778820   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.780129   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43699
	I0717 01:38:10.780526   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:38:10.780732   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.781258   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.781283   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.781642   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.782089   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.782134   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.782214   69161 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:38:10.783466   69161 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:38:10.783484   69161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:38:10.783501   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:38:10.786557   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.786985   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:38:10.787006   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.787233   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:38:10.787393   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:38:10.787514   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:38:10.787610   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:38:10.798054   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I0717 01:38:10.798498   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.798922   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.798942   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.799281   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.799452   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.801194   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:38:10.801413   69161 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:38:10.801428   69161 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:38:10.801444   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:38:10.804551   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.804963   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:38:10.804988   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.805103   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:38:10.805413   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:38:10.805564   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:38:10.805712   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:38:10.941843   69161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:38:10.962485   69161 node_ready.go:35] waiting up to 6m0s for node "no-preload-818382" to be "Ready" ...
	I0717 01:38:11.029564   69161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:38:11.047993   69161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:38:11.180628   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:38:11.180648   69161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:38:11.254864   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:38:11.254891   69161 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:38:11.322266   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:38:11.322290   69161 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:38:11.386819   69161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:38:12.107148   69161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.059119392s)
	I0717 01:38:12.107209   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107223   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107351   69161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077746478s)
	I0717 01:38:12.107396   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107407   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107523   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.107542   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.107553   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107562   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107751   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.107766   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.107780   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.107789   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107793   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.107798   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107824   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.107831   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.108023   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.108056   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.108064   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.120981   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.121012   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.121920   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.121942   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.121958   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.192883   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.192908   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.193311   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.193357   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.193369   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.193378   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.193389   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.193656   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.193695   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.193704   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.193720   69161 addons.go:475] Verifying addon metrics-server=true in "no-preload-818382"
	I0717 01:38:12.196085   69161 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:38:12.197195   69161 addons.go:510] duration metric: took 1.478669603s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 01:38:12.968419   69161 node_ready.go:53] node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:15.466641   69161 node_ready.go:53] node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:17.966396   69161 node_ready.go:49] node "no-preload-818382" has status "Ready":"True"
	I0717 01:38:17.966419   69161 node_ready.go:38] duration metric: took 7.003900387s for node "no-preload-818382" to be "Ready" ...
	I0717 01:38:17.966428   69161 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:38:17.972276   69161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:17.979661   69161 pod_ready.go:92] pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:17.979686   69161 pod_ready.go:81] duration metric: took 7.383414ms for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:17.979700   69161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:19.986664   69161 pod_ready.go:102] pod "etcd-no-preload-818382" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:22.486306   69161 pod_ready.go:102] pod "etcd-no-preload-818382" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:23.988340   69161 pod_ready.go:92] pod "etcd-no-preload-818382" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:23.988366   69161 pod_ready.go:81] duration metric: took 6.008658778s for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.988379   69161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.994341   69161 pod_ready.go:92] pod "kube-apiserver-no-preload-818382" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:23.994369   69161 pod_ready.go:81] duration metric: took 5.983444ms for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.994378   69161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.999839   69161 pod_ready.go:92] pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:23.999858   69161 pod_ready.go:81] duration metric: took 5.472052ms for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.999870   69161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:24.004359   69161 pod_ready.go:92] pod "kube-proxy-7xjgl" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:24.004376   69161 pod_ready.go:81] duration metric: took 4.499078ms for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:24.004388   69161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:24.008711   69161 pod_ready.go:92] pod "kube-scheduler-no-preload-818382" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:24.008728   69161 pod_ready.go:81] duration metric: took 4.333011ms for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:24.008738   69161 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:26.015816   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:28.515069   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:30.515823   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:33.015758   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:35.519125   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:38.015328   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:40.015434   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:42.016074   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:44.515165   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:46.515207   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:48.515526   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:51.015352   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:53.524771   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:55.525830   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:58.015294   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:00.016582   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:02.526596   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:05.017331   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:07.522994   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:10.015668   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:12.016581   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:14.514264   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:16.514483   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:18.514912   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:20.516805   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:23.017254   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:25.520744   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:27.525313   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:30.015300   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:32.515768   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:34.516472   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:37.015323   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:39.519189   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:41.519551   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:43.519612   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:46.015845   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:48.514995   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:51.015723   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:53.518041   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:56.016848   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:58.515231   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:01.014815   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:03.016104   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:05.515128   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:08.015053   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:10.515596   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:12.516108   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:15.016422   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:17.516656   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:20.023212   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:22.516829   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:25.015503   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:27.515818   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:29.516308   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:31.516354   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:34.014939   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:36.015491   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:38.515680   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:40.516729   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:43.015702   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:45.016597   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:47.516644   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:50.016083   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:52.016256   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:54.016658   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:56.019466   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:58.517513   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:01.015342   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:03.016255   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:05.017209   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:07.514660   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:09.515175   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:11.515986   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:14.016122   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:16.516248   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:19.016993   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:21.515181   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:23.515448   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:26.016226   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:28.516309   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:31.016068   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:33.516141   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:36.015057   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:38.015141   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:40.015943   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:42.515237   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:44.515403   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:46.516180   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:49.014892   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:51.019533   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:53.514629   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:55.515878   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:57.516813   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:00.016045   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:02.515848   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:05.017085   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:07.515218   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:10.016436   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:12.514412   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:14.515538   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:17.015473   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:19.516189   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:22.015149   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:24.015247   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:24.015279   69161 pod_ready.go:81] duration metric: took 4m0.006532152s for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	E0717 01:42:24.015291   69161 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 01:42:24.015300   69161 pod_ready.go:38] duration metric: took 4m6.048863476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:42:24.015319   69161 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:42:24.015354   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:42:24.015412   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:42:24.070533   69161 cri.go:89] found id: "8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:24.070555   69161 cri.go:89] found id: ""
	I0717 01:42:24.070564   69161 logs.go:276] 1 containers: [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2]
	I0717 01:42:24.070624   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.075767   69161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:42:24.075844   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:42:24.118412   69161 cri.go:89] found id: "0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:24.118434   69161 cri.go:89] found id: ""
	I0717 01:42:24.118442   69161 logs.go:276] 1 containers: [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf]
	I0717 01:42:24.118491   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.123255   69161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:42:24.123323   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:42:24.159858   69161 cri.go:89] found id: "e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:24.159880   69161 cri.go:89] found id: ""
	I0717 01:42:24.159887   69161 logs.go:276] 1 containers: [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902]
	I0717 01:42:24.159938   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.164261   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:42:24.164333   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:42:24.201402   69161 cri.go:89] found id: "b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:24.201429   69161 cri.go:89] found id: ""
	I0717 01:42:24.201438   69161 logs.go:276] 1 containers: [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc]
	I0717 01:42:24.201490   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.206056   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:42:24.206112   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:42:24.241083   69161 cri.go:89] found id: "98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:24.241109   69161 cri.go:89] found id: ""
	I0717 01:42:24.241119   69161 logs.go:276] 1 containers: [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571]
	I0717 01:42:24.241177   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.245739   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:42:24.245794   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:42:24.284369   69161 cri.go:89] found id: "7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:24.284400   69161 cri.go:89] found id: ""
	I0717 01:42:24.284410   69161 logs.go:276] 1 containers: [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e]
	I0717 01:42:24.284473   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.290128   69161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:42:24.290184   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:42:24.328815   69161 cri.go:89] found id: ""
	I0717 01:42:24.328841   69161 logs.go:276] 0 containers: []
	W0717 01:42:24.328848   69161 logs.go:278] No container was found matching "kindnet"
	I0717 01:42:24.328854   69161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:42:24.328919   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:42:24.365591   69161 cri.go:89] found id: "da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:24.365614   69161 cri.go:89] found id: "b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:24.365621   69161 cri.go:89] found id: ""
	I0717 01:42:24.365630   69161 logs.go:276] 2 containers: [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a]
	I0717 01:42:24.365690   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.370614   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.375611   69161 logs.go:123] Gathering logs for dmesg ...
	I0717 01:42:24.375641   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:42:24.392837   69161 logs.go:123] Gathering logs for etcd [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf] ...
	I0717 01:42:24.392872   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:24.443010   69161 logs.go:123] Gathering logs for container status ...
	I0717 01:42:24.443036   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:42:24.482837   69161 logs.go:123] Gathering logs for coredns [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902] ...
	I0717 01:42:24.482870   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:24.536236   69161 logs.go:123] Gathering logs for kube-scheduler [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc] ...
	I0717 01:42:24.536262   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:24.576709   69161 logs.go:123] Gathering logs for kube-proxy [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571] ...
	I0717 01:42:24.576740   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:24.625042   69161 logs.go:123] Gathering logs for kube-controller-manager [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e] ...
	I0717 01:42:24.625069   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:24.679911   69161 logs.go:123] Gathering logs for storage-provisioner [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461] ...
	I0717 01:42:24.679945   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:24.721782   69161 logs.go:123] Gathering logs for kubelet ...
	I0717 01:42:24.721809   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:42:24.775881   69161 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:42:24.775916   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:42:24.917773   69161 logs.go:123] Gathering logs for kube-apiserver [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2] ...
	I0717 01:42:24.917806   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:24.962644   69161 logs.go:123] Gathering logs for storage-provisioner [b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a] ...
	I0717 01:42:24.962673   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:25.002204   69161 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:42:25.002242   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
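	(The cycle above gathers component logs by shelling out to crictl on the node: `crictl ps -a --quiet --name=<component>` to resolve a container ID, then `crictl logs --tail 400 <id>` to capture its recent output. Below is a minimal Go sketch, not minikube's own code, of those two commands run locally; it assumes crictl is on PATH and that the current user can reach the CRI socket, whereas the harness runs the same commands over SSH with sudo.

	// Hypothetical sketch: resolve a container ID with crictl and tail its logs,
	// mirroring the "Gathering logs for ..." steps above. Not minikube source.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// IDs only, any state, filtered by component name (here: etcd).
		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name=etcd").Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Println("no etcd container found")
			return
		}
		// Tail the last 400 log lines of the first match, as the harness does.
		logs, err := exec.Command("crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
		if err != nil {
			fmt.Println("crictl logs failed:", err)
		}
		fmt.Print(string(logs))
	})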
	I0717 01:42:28.032243   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:42:28.049580   69161 api_server.go:72] duration metric: took 4m17.331083879s to wait for apiserver process to appear ...
	I0717 01:42:28.049612   69161 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:42:28.049656   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:42:28.049717   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:42:28.088496   69161 cri.go:89] found id: "8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:28.088519   69161 cri.go:89] found id: ""
	I0717 01:42:28.088527   69161 logs.go:276] 1 containers: [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2]
	I0717 01:42:28.088598   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.092659   69161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:42:28.092712   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:42:28.127205   69161 cri.go:89] found id: "0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:28.127224   69161 cri.go:89] found id: ""
	I0717 01:42:28.127231   69161 logs.go:276] 1 containers: [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf]
	I0717 01:42:28.127276   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.131356   69161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:42:28.131425   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:42:28.166535   69161 cri.go:89] found id: "e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:28.166556   69161 cri.go:89] found id: ""
	I0717 01:42:28.166564   69161 logs.go:276] 1 containers: [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902]
	I0717 01:42:28.166608   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.170576   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:42:28.170633   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:42:28.204842   69161 cri.go:89] found id: "b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:28.204863   69161 cri.go:89] found id: ""
	I0717 01:42:28.204871   69161 logs.go:276] 1 containers: [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc]
	I0717 01:42:28.204924   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.208869   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:42:28.208922   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:42:28.241397   69161 cri.go:89] found id: "98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:28.241414   69161 cri.go:89] found id: ""
	I0717 01:42:28.241421   69161 logs.go:276] 1 containers: [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571]
	I0717 01:42:28.241461   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.245569   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:42:28.245630   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:42:28.282072   69161 cri.go:89] found id: "7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:28.282097   69161 cri.go:89] found id: ""
	I0717 01:42:28.282106   69161 logs.go:276] 1 containers: [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e]
	I0717 01:42:28.282159   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.286678   69161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:42:28.286738   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:42:28.320229   69161 cri.go:89] found id: ""
	I0717 01:42:28.320255   69161 logs.go:276] 0 containers: []
	W0717 01:42:28.320265   69161 logs.go:278] No container was found matching "kindnet"
	I0717 01:42:28.320271   69161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:42:28.320321   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:42:28.358955   69161 cri.go:89] found id: "da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:28.358979   69161 cri.go:89] found id: "b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:28.358985   69161 cri.go:89] found id: ""
	I0717 01:42:28.358992   69161 logs.go:276] 2 containers: [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a]
	I0717 01:42:28.359051   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.363407   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.367862   69161 logs.go:123] Gathering logs for kube-scheduler [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc] ...
	I0717 01:42:28.367886   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:28.405920   69161 logs.go:123] Gathering logs for kube-proxy [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571] ...
	I0717 01:42:28.405948   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:28.442790   69161 logs.go:123] Gathering logs for kube-controller-manager [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e] ...
	I0717 01:42:28.442814   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:28.507947   69161 logs.go:123] Gathering logs for storage-provisioner [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461] ...
	I0717 01:42:28.507977   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:28.543353   69161 logs.go:123] Gathering logs for storage-provisioner [b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a] ...
	I0717 01:42:28.543375   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:28.591451   69161 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:42:28.591484   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:42:29.046193   69161 logs.go:123] Gathering logs for container status ...
	I0717 01:42:29.046234   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:42:29.093710   69161 logs.go:123] Gathering logs for etcd [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf] ...
	I0717 01:42:29.093743   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:29.132784   69161 logs.go:123] Gathering logs for dmesg ...
	I0717 01:42:29.132811   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:42:29.148146   69161 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:42:29.148176   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:42:29.250655   69161 logs.go:123] Gathering logs for kube-apiserver [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2] ...
	I0717 01:42:29.250682   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:29.295193   69161 logs.go:123] Gathering logs for coredns [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902] ...
	I0717 01:42:29.295222   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:29.330372   69161 logs.go:123] Gathering logs for kubelet ...
	I0717 01:42:29.330404   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:42:31.882296   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:42:31.887420   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0717 01:42:31.889130   69161 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 01:42:31.889151   69161 api_server.go:131] duration metric: took 3.839533176s to wait for apiserver health ...
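	(The health wait above polls the apiserver's /healthz endpoint until it returns HTTP 200 with body "ok", then reads the control-plane version. A minimal Go sketch of that polling loop follows; it is not minikube's implementation, and the URL, timeout, and skipped TLS verification are assumptions for a self-signed test cluster where the default system:public-info-viewer binding typically allows anonymous access to /healthz.

	// Hypothetical sketch: poll https://<apiserver>/healthz until it returns 200 "ok",
	// as the step above does for https://192.168.39.38:8443/healthz. Not minikube source.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Self-signed test-cluster cert assumed; do not skip verification in production.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.38:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	})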
	I0717 01:42:31.889159   69161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:42:31.889180   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:42:31.889231   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:42:31.932339   69161 cri.go:89] found id: "8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:31.932359   69161 cri.go:89] found id: ""
	I0717 01:42:31.932369   69161 logs.go:276] 1 containers: [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2]
	I0717 01:42:31.932428   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:31.936635   69161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:42:31.936694   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:42:31.973771   69161 cri.go:89] found id: "0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:31.973797   69161 cri.go:89] found id: ""
	I0717 01:42:31.973805   69161 logs.go:276] 1 containers: [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf]
	I0717 01:42:31.973864   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:31.978328   69161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:42:31.978400   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:42:32.017561   69161 cri.go:89] found id: "e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:32.017589   69161 cri.go:89] found id: ""
	I0717 01:42:32.017598   69161 logs.go:276] 1 containers: [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902]
	I0717 01:42:32.017652   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.021983   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:42:32.022043   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:42:32.060032   69161 cri.go:89] found id: "b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:32.060058   69161 cri.go:89] found id: ""
	I0717 01:42:32.060067   69161 logs.go:276] 1 containers: [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc]
	I0717 01:42:32.060124   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.064390   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:42:32.064447   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:42:32.104292   69161 cri.go:89] found id: "98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:32.104314   69161 cri.go:89] found id: ""
	I0717 01:42:32.104322   69161 logs.go:276] 1 containers: [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571]
	I0717 01:42:32.104378   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.108874   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:42:32.108939   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:42:32.151590   69161 cri.go:89] found id: "7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:32.151611   69161 cri.go:89] found id: ""
	I0717 01:42:32.151619   69161 logs.go:276] 1 containers: [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e]
	I0717 01:42:32.151683   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.155683   69161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:42:32.155749   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:42:32.191197   69161 cri.go:89] found id: ""
	I0717 01:42:32.191224   69161 logs.go:276] 0 containers: []
	W0717 01:42:32.191235   69161 logs.go:278] No container was found matching "kindnet"
	I0717 01:42:32.191250   69161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:42:32.191315   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:42:32.228709   69161 cri.go:89] found id: "da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:32.228729   69161 cri.go:89] found id: "b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:32.228734   69161 cri.go:89] found id: ""
	I0717 01:42:32.228741   69161 logs.go:276] 2 containers: [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a]
	I0717 01:42:32.228825   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.234032   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.239566   69161 logs.go:123] Gathering logs for dmesg ...
	I0717 01:42:32.239588   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:42:32.254327   69161 logs.go:123] Gathering logs for kube-apiserver [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2] ...
	I0717 01:42:32.254353   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:32.313682   69161 logs.go:123] Gathering logs for etcd [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf] ...
	I0717 01:42:32.313709   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:32.354250   69161 logs.go:123] Gathering logs for kube-controller-manager [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e] ...
	I0717 01:42:32.354278   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:32.404452   69161 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:42:32.404490   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:42:32.824059   69161 logs.go:123] Gathering logs for kubelet ...
	I0717 01:42:32.824092   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:42:32.877614   69161 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:42:32.877645   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:42:32.987728   69161 logs.go:123] Gathering logs for coredns [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902] ...
	I0717 01:42:32.987756   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:33.028146   69161 logs.go:123] Gathering logs for kube-scheduler [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc] ...
	I0717 01:42:33.028183   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:33.067880   69161 logs.go:123] Gathering logs for kube-proxy [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571] ...
	I0717 01:42:33.067907   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:33.106837   69161 logs.go:123] Gathering logs for storage-provisioner [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461] ...
	I0717 01:42:33.106870   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:33.141500   69161 logs.go:123] Gathering logs for storage-provisioner [b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a] ...
	I0717 01:42:33.141530   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:33.183960   69161 logs.go:123] Gathering logs for container status ...
	I0717 01:42:33.183991   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:42:35.738491   69161 system_pods.go:59] 8 kube-system pods found
	I0717 01:42:35.738522   69161 system_pods.go:61] "coredns-5cfdc65f69-rzhfk" [eb91980f-dca7-4dd0-902e-7d1ffac4e1b7] Running
	I0717 01:42:35.738526   69161 system_pods.go:61] "etcd-no-preload-818382" [99688a8a-50fc-416b-9c00-23a516eab775] Running
	I0717 01:42:35.738531   69161 system_pods.go:61] "kube-apiserver-no-preload-818382" [3e08eb95-84f7-4541-a2c3-9a5b9e3365f9] Running
	I0717 01:42:35.738536   69161 system_pods.go:61] "kube-controller-manager-no-preload-818382" [d356be23-8cd9-4f72-94e6-354a39f587eb] Running
	I0717 01:42:35.738551   69161 system_pods.go:61] "kube-proxy-7xjgl" [79ab1bff-5791-464d-98a0-041c53c47234] Running
	I0717 01:42:35.738558   69161 system_pods.go:61] "kube-scheduler-no-preload-818382" [e148b48b-ee09-49b4-9600-83c039254f29] Running
	I0717 01:42:35.738567   69161 system_pods.go:61] "metrics-server-78fcd8795b-vgkwg" [6386b732-76a6-4744-9215-e4764e08e4e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:42:35.738573   69161 system_pods.go:61] "storage-provisioner" [c5a0695e-6c38-463e-8f96-60c0e60c7132] Running
	I0717 01:42:35.738583   69161 system_pods.go:74] duration metric: took 3.849417383s to wait for pod list to return data ...
	I0717 01:42:35.738596   69161 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:42:35.741135   69161 default_sa.go:45] found service account: "default"
	I0717 01:42:35.741154   69161 default_sa.go:55] duration metric: took 2.55225ms for default service account to be created ...
	I0717 01:42:35.741160   69161 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:42:35.745925   69161 system_pods.go:86] 8 kube-system pods found
	I0717 01:42:35.745944   69161 system_pods.go:89] "coredns-5cfdc65f69-rzhfk" [eb91980f-dca7-4dd0-902e-7d1ffac4e1b7] Running
	I0717 01:42:35.745949   69161 system_pods.go:89] "etcd-no-preload-818382" [99688a8a-50fc-416b-9c00-23a516eab775] Running
	I0717 01:42:35.745953   69161 system_pods.go:89] "kube-apiserver-no-preload-818382" [3e08eb95-84f7-4541-a2c3-9a5b9e3365f9] Running
	I0717 01:42:35.745957   69161 system_pods.go:89] "kube-controller-manager-no-preload-818382" [d356be23-8cd9-4f72-94e6-354a39f587eb] Running
	I0717 01:42:35.745961   69161 system_pods.go:89] "kube-proxy-7xjgl" [79ab1bff-5791-464d-98a0-041c53c47234] Running
	I0717 01:42:35.745965   69161 system_pods.go:89] "kube-scheduler-no-preload-818382" [e148b48b-ee09-49b4-9600-83c039254f29] Running
	I0717 01:42:35.745971   69161 system_pods.go:89] "metrics-server-78fcd8795b-vgkwg" [6386b732-76a6-4744-9215-e4764e08e4e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:42:35.745977   69161 system_pods.go:89] "storage-provisioner" [c5a0695e-6c38-463e-8f96-60c0e60c7132] Running
	I0717 01:42:35.745986   69161 system_pods.go:126] duration metric: took 4.820763ms to wait for k8s-apps to be running ...
	I0717 01:42:35.745994   69161 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:42:35.746043   69161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:42:35.763979   69161 system_svc.go:56] duration metric: took 17.975443ms WaitForService to wait for kubelet
	I0717 01:42:35.764007   69161 kubeadm.go:582] duration metric: took 4m25.045517006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:42:35.764027   69161 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:42:35.768267   69161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:42:35.768297   69161 node_conditions.go:123] node cpu capacity is 2
	I0717 01:42:35.768312   69161 node_conditions.go:105] duration metric: took 4.280712ms to run NodePressure ...
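	(The checks above list the kube-system pods through the API, confirm the default service account exists, and verify the kubelet unit with `systemctl is-active`. The client-go sketch below, which is not minikube's code, shows only the pod part of that wait: list kube-system pods and flag any that are not yet Running, such as the Pending metrics-server pod above. The kubeconfig path is a placeholder assumption.

	// Hypothetical sketch: list kube-system pods with client-go and flag any that
	// are not Running, mirroring the "waiting for k8s-apps to be running" step.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("%q is %s\n", p.Name, p.Status.Phase)
			}
		}
	})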
	I0717 01:42:35.768337   69161 start.go:241] waiting for startup goroutines ...
	I0717 01:42:35.768347   69161 start.go:246] waiting for cluster config update ...
	I0717 01:42:35.768374   69161 start.go:255] writing updated cluster config ...
	I0717 01:42:35.768681   69161 ssh_runner.go:195] Run: rm -f paused
	I0717 01:42:35.817223   69161 start.go:600] kubectl: 1.30.2, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 01:42:35.819333   69161 out.go:177] * Done! kubectl is now configured to use "no-preload-818382" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.379639449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180557379618732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d81d55a-9943-49c9-b10f-2bdccef3850d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.380230474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e693e9fb-d139-44e0-ad64-9716dcbad33d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.380305129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e693e9fb-d139-44e0-ad64-9716dcbad33d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.380678856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179781156710506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d993bb9350f7bfc32762f91918a1cb985ed555ea57afdb3efe52e40c1f37803,PodSandboxId:580c1f98b322514e8dc6af4b464a4e9712a0cef358428b2067f3f95b2a4f8f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721179759162259370,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f9c5cb46-8df1-450a-9ca7-a686651c1835,},Annotations:map[string]string{io.kubernetes.container.hash: 21f4c01a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187,PodSandboxId:cac67b7d41ea1385a1e0eca5710372b6fd990ff55283adb3fcd616be564f0dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179757918652809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z4qpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43aa103c-9e70-4fb1-8607-321b6904a218,},Annotations:map[string]string{io.kubernetes.container.hash: ed0dfeb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721179750371786140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364,PodSandboxId:06e63e0ee89343e4f704f40b041c99eba9560210004538fbeedf4d9f5e899af2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179750367476881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gq7qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9a0ae4-28e0-4900-a39b-f7a0eba7c
c06,},Annotations:map[string]string{io.kubernetes.container.hash: 313309da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c,PodSandboxId:33e11f7db5878fd01048d61d2099a8becdfebc5897f3800ca3f074588f863c13,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179745612950992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bec379c140db7a
0ad7e87dd7d54513da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026,PodSandboxId:f61e87c7b0eade411dc2d12c48d596b2b233980e47721e338454c6c50c5cdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179745635815659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca69dd5666621348366299d511
a00935,},Annotations:map[string]string{io.kubernetes.container.hash: 17c2edea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802,PodSandboxId:0d62d3963c8101b674dd20a45d0bb0b34e4a21d3ff09d70b05121745617a8ee9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179745639586318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec50a383234f49917f3a24369567b00,},Ann
otations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c,PodSandboxId:d11db21897316076a25a10d3cfc9c882b128a44c0a1d0ced43e8092e0755fb31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179745613603556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e013499247e47bae51c51faca75cfb,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 638512c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e693e9fb-d139-44e0-ad64-9716dcbad33d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.416141135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7382853a-68cc-4810-8e2d-dc76733daf8e name=/runtime.v1.RuntimeService/Version
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.416275147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7382853a-68cc-4810-8e2d-dc76733daf8e name=/runtime.v1.RuntimeService/Version
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.418413281Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=073610ef-457c-44b2-8470-328e0178f412 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.418901353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180557418877911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=073610ef-457c-44b2-8470-328e0178f412 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.419459573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf8a5128-7cf2-4e26-844b-e972c7eb67fc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.419568289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf8a5128-7cf2-4e26-844b-e972c7eb67fc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.419759553Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179781156710506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d993bb9350f7bfc32762f91918a1cb985ed555ea57afdb3efe52e40c1f37803,PodSandboxId:580c1f98b322514e8dc6af4b464a4e9712a0cef358428b2067f3f95b2a4f8f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721179759162259370,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f9c5cb46-8df1-450a-9ca7-a686651c1835,},Annotations:map[string]string{io.kubernetes.container.hash: 21f4c01a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187,PodSandboxId:cac67b7d41ea1385a1e0eca5710372b6fd990ff55283adb3fcd616be564f0dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179757918652809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z4qpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43aa103c-9e70-4fb1-8607-321b6904a218,},Annotations:map[string]string{io.kubernetes.container.hash: ed0dfeb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721179750371786140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364,PodSandboxId:06e63e0ee89343e4f704f40b041c99eba9560210004538fbeedf4d9f5e899af2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179750367476881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gq7qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9a0ae4-28e0-4900-a39b-f7a0eba7c
c06,},Annotations:map[string]string{io.kubernetes.container.hash: 313309da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c,PodSandboxId:33e11f7db5878fd01048d61d2099a8becdfebc5897f3800ca3f074588f863c13,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179745612950992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bec379c140db7a
0ad7e87dd7d54513da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026,PodSandboxId:f61e87c7b0eade411dc2d12c48d596b2b233980e47721e338454c6c50c5cdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179745635815659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca69dd5666621348366299d511
a00935,},Annotations:map[string]string{io.kubernetes.container.hash: 17c2edea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802,PodSandboxId:0d62d3963c8101b674dd20a45d0bb0b34e4a21d3ff09d70b05121745617a8ee9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179745639586318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec50a383234f49917f3a24369567b00,},Ann
otations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c,PodSandboxId:d11db21897316076a25a10d3cfc9c882b128a44c0a1d0ced43e8092e0755fb31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179745613603556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e013499247e47bae51c51faca75cfb,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 638512c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf8a5128-7cf2-4e26-844b-e972c7eb67fc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.459745190Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae268d21-f5e9-4879-b78f-9c456bab6a5e name=/runtime.v1.RuntimeService/Version
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.459839121Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae268d21-f5e9-4879-b78f-9c456bab6a5e name=/runtime.v1.RuntimeService/Version
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.460957526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10edbc1f-812a-4188-b2f7-0d8bfa55451f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.461342005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180557461322965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10edbc1f-812a-4188-b2f7-0d8bfa55451f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.461893693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=852a0634-3d70-46ee-813a-e939188350e2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.461947605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=852a0634-3d70-46ee-813a-e939188350e2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.462133508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179781156710506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d993bb9350f7bfc32762f91918a1cb985ed555ea57afdb3efe52e40c1f37803,PodSandboxId:580c1f98b322514e8dc6af4b464a4e9712a0cef358428b2067f3f95b2a4f8f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721179759162259370,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f9c5cb46-8df1-450a-9ca7-a686651c1835,},Annotations:map[string]string{io.kubernetes.container.hash: 21f4c01a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187,PodSandboxId:cac67b7d41ea1385a1e0eca5710372b6fd990ff55283adb3fcd616be564f0dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179757918652809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z4qpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43aa103c-9e70-4fb1-8607-321b6904a218,},Annotations:map[string]string{io.kubernetes.container.hash: ed0dfeb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721179750371786140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364,PodSandboxId:06e63e0ee89343e4f704f40b041c99eba9560210004538fbeedf4d9f5e899af2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179750367476881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gq7qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9a0ae4-28e0-4900-a39b-f7a0eba7c
c06,},Annotations:map[string]string{io.kubernetes.container.hash: 313309da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c,PodSandboxId:33e11f7db5878fd01048d61d2099a8becdfebc5897f3800ca3f074588f863c13,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179745612950992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bec379c140db7a
0ad7e87dd7d54513da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026,PodSandboxId:f61e87c7b0eade411dc2d12c48d596b2b233980e47721e338454c6c50c5cdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179745635815659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca69dd5666621348366299d511
a00935,},Annotations:map[string]string{io.kubernetes.container.hash: 17c2edea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802,PodSandboxId:0d62d3963c8101b674dd20a45d0bb0b34e4a21d3ff09d70b05121745617a8ee9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179745639586318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec50a383234f49917f3a24369567b00,},Ann
otations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c,PodSandboxId:d11db21897316076a25a10d3cfc9c882b128a44c0a1d0ced43e8092e0755fb31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179745613603556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e013499247e47bae51c51faca75cfb,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 638512c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=852a0634-3d70-46ee-813a-e939188350e2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.495400693Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba32c34b-5392-4579-9d98-2575cf228eb7 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.495532903Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba32c34b-5392-4579-9d98-2575cf228eb7 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.497449648Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3d9b1df-4742-44f5-9557-1b1dc99f8637 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.497876691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180557497854276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3d9b1df-4742-44f5-9557-1b1dc99f8637 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.498414869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bdcb6888-b509-4ba6-9596-c0c2f349e5e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.498483387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bdcb6888-b509-4ba6-9596-c0c2f349e5e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:42:37 embed-certs-484167 crio[722]: time="2024-07-17 01:42:37.499505746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179781156710506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d993bb9350f7bfc32762f91918a1cb985ed555ea57afdb3efe52e40c1f37803,PodSandboxId:580c1f98b322514e8dc6af4b464a4e9712a0cef358428b2067f3f95b2a4f8f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721179759162259370,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f9c5cb46-8df1-450a-9ca7-a686651c1835,},Annotations:map[string]string{io.kubernetes.container.hash: 21f4c01a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187,PodSandboxId:cac67b7d41ea1385a1e0eca5710372b6fd990ff55283adb3fcd616be564f0dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179757918652809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z4qpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43aa103c-9e70-4fb1-8607-321b6904a218,},Annotations:map[string]string{io.kubernetes.container.hash: ed0dfeb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721179750371786140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364,PodSandboxId:06e63e0ee89343e4f704f40b041c99eba9560210004538fbeedf4d9f5e899af2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179750367476881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gq7qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9a0ae4-28e0-4900-a39b-f7a0eba7c
c06,},Annotations:map[string]string{io.kubernetes.container.hash: 313309da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c,PodSandboxId:33e11f7db5878fd01048d61d2099a8becdfebc5897f3800ca3f074588f863c13,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179745612950992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bec379c140db7a
0ad7e87dd7d54513da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026,PodSandboxId:f61e87c7b0eade411dc2d12c48d596b2b233980e47721e338454c6c50c5cdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179745635815659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca69dd5666621348366299d511
a00935,},Annotations:map[string]string{io.kubernetes.container.hash: 17c2edea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802,PodSandboxId:0d62d3963c8101b674dd20a45d0bb0b34e4a21d3ff09d70b05121745617a8ee9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179745639586318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec50a383234f49917f3a24369567b00,},Ann
otations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c,PodSandboxId:d11db21897316076a25a10d3cfc9c882b128a44c0a1d0ced43e8092e0755fb31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179745613603556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e013499247e47bae51c51faca75cfb,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 638512c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bdcb6888-b509-4ba6-9596-c0c2f349e5e9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a425272031e79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   2826492fd74f0       storage-provisioner
	7d993bb9350f7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   580c1f98b3225       busybox
	370fe40274893       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   cac67b7d41ea1       coredns-7db6d8ff4d-z4qpz
	dc597519e45ca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   2826492fd74f0       storage-provisioner
	2bad298334c16       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      13 minutes ago      Running             kube-proxy                1                   06e63e0ee8934       kube-proxy-gq7qg
	98433f2cdcf43       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago      Running             kube-scheduler            1                   0d62d3963c810       kube-scheduler-embed-certs-484167
	d8d11986de466       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      13 minutes ago      Running             kube-apiserver            1                   f61e87c7b0ead       kube-apiserver-embed-certs-484167
	980691b126eee       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   d11db21897316       etcd-embed-certs-484167
	b9c4b4f6e05b2       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      13 minutes ago      Running             kube-controller-manager   1                   33e11f7db5878       kube-controller-manager-embed-certs-484167
	
	
	==> coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40336 - 4277 "HINFO IN 9002073944448212575.8652882617969124480. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009951171s
	
	
	==> describe nodes <==
	Name:               embed-certs-484167
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-484167
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=embed-certs-484167
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_20_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:20:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-484167
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:42:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:39:51 +0000   Wed, 17 Jul 2024 01:20:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:39:51 +0000   Wed, 17 Jul 2024 01:20:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:39:51 +0000   Wed, 17 Jul 2024 01:20:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:39:51 +0000   Wed, 17 Jul 2024 01:29:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.48
	  Hostname:    embed-certs-484167
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64980e167f3d439991be2dff0b86f1ea
	  System UUID:                64980e16-7f3d-4399-91be-2dff0b86f1ea
	  Boot ID:                    b27debbd-3d14-429b-91ca-a1c60ef2f995
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-z4qpz                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-484167                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-484167             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-484167    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-gq7qg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-484167             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-569cc877fc-2qwf6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-484167 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-484167 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-484167 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node embed-certs-484167 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-484167 event: Registered Node embed-certs-484167 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-484167 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-484167 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-484167 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-484167 event: Registered Node embed-certs-484167 in Controller
	
	
	==> dmesg <==
	[Jul17 01:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051101] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041196] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.036514] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.258383] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.628840] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.559161] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.065219] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060736] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.180674] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.122123] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.305976] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[Jul17 01:29] systemd-fstab-generator[805]: Ignoring "noauto" option for root device
	[  +0.069281] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.811311] systemd-fstab-generator[927]: Ignoring "noauto" option for root device
	[  +5.618976] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.060211] systemd-fstab-generator[1535]: Ignoring "noauto" option for root device
	[  +1.632889] kauditd_printk_skb: 62 callbacks suppressed
	[  +8.100583] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] <==
	{"level":"info","ts":"2024-07-17T01:29:05.97346Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:29:05.986501Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:29:05.986753Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"36b30da979eae81e","initial-advertise-peer-urls":["https://192.168.72.48:2380"],"listen-peer-urls":["https://192.168.72.48:2380"],"advertise-client-urls":["https://192.168.72.48:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.48:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:29:05.9868Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:29:05.986898Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.48:2380"}
	{"level":"info","ts":"2024-07-17T01:29:05.986923Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.48:2380"}
	{"level":"info","ts":"2024-07-17T01:29:07.811852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36b30da979eae81e is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T01:29:07.811974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36b30da979eae81e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:29:07.812052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36b30da979eae81e received MsgPreVoteResp from 36b30da979eae81e at term 2"}
	{"level":"info","ts":"2024-07-17T01:29:07.812086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36b30da979eae81e became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T01:29:07.81211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36b30da979eae81e received MsgVoteResp from 36b30da979eae81e at term 3"}
	{"level":"info","ts":"2024-07-17T01:29:07.812138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36b30da979eae81e became leader at term 3"}
	{"level":"info","ts":"2024-07-17T01:29:07.812163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 36b30da979eae81e elected leader 36b30da979eae81e at term 3"}
	{"level":"info","ts":"2024-07-17T01:29:07.816516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:29:07.816465Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"36b30da979eae81e","local-member-attributes":"{Name:embed-certs-484167 ClientURLs:[https://192.168.72.48:2379]}","request-path":"/0/members/36b30da979eae81e/attributes","cluster-id":"a85db1df86d6d05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:29:07.817698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:29:07.818071Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:29:07.818129Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:29:07.819269Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.48:2379"}
	{"level":"info","ts":"2024-07-17T01:29:07.82084Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-17T01:29:27.563679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.264295ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16725965213138843811 > lease_revoke:<id:681e90be4eae772e>","response":"size:27"}
	{"level":"info","ts":"2024-07-17T01:37:50.821237Z","caller":"traceutil/trace.go:171","msg":"trace[545432555] transaction","detail":"{read_only:false; response_revision:975; number_of_response:1; }","duration":"208.478805ms","start":"2024-07-17T01:37:50.612708Z","end":"2024-07-17T01:37:50.821187Z","steps":["trace[545432555] 'process raft request'  (duration: 208.321309ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:39:07.85015Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":794}
	{"level":"info","ts":"2024-07-17T01:39:07.860878Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":794,"took":"9.907457ms","hash":2571012677,"current-db-size-bytes":2564096,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2564096,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-17T01:39:07.860978Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2571012677,"revision":794,"compact-revision":-1}
	
	
	==> kernel <==
	 01:42:37 up 13 min,  0 users,  load average: 0.01, 0.09, 0.08
	Linux embed-certs-484167 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] <==
	I0717 01:37:10.226239       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:39:09.228000       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:39:09.228420       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 01:39:10.229462       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:39:10.229596       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 01:39:10.229626       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:39:10.229733       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:39:10.229839       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 01:39:10.230824       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:40:10.230434       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:40:10.230552       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 01:40:10.230561       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:40:10.231539       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:40:10.231726       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 01:40:10.231779       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:42:10.231586       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:42:10.231890       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 01:42:10.232008       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:42:10.231993       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:42:10.232092       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 01:42:10.233841       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] <==
	I0717 01:36:54.869860       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:37:24.253691       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:37:24.877507       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:37:54.259176       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:37:54.886793       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:38:24.264228       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:38:24.894821       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:38:54.268398       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:38:54.905729       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:39:24.272633       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:39:24.915470       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:39:54.277931       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:39:54.924024       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 01:40:19.952015       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="370.756µs"
	E0717 01:40:24.284395       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:40:24.931110       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 01:40:32.949779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="60.895µs"
	E0717 01:40:54.290323       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:40:54.957542       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:41:24.297783       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:41:24.966421       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:41:54.302571       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:41:54.974135       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:42:24.308786       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:42:24.983552       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] <==
	I0717 01:29:10.571655       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:29:10.582119       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.48"]
	I0717 01:29:10.618911       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:29:10.618953       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:29:10.619008       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:29:10.621543       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:29:10.621805       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:29:10.621829       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:29:10.623123       1 config.go:192] "Starting service config controller"
	I0717 01:29:10.623160       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:29:10.623185       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:29:10.623189       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:29:10.623780       1 config.go:319] "Starting node config controller"
	I0717 01:29:10.623806       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:29:10.723529       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:29:10.723621       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:29:10.723873       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] <==
	I0717 01:29:06.364868       1 serving.go:380] Generated self-signed cert in-memory
	W0717 01:29:09.149828       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:29:09.149986       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:29:09.150080       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:29:09.150111       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:29:09.192931       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 01:29:09.193086       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:29:09.206099       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:29:09.208242       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:29:09.208297       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:29:09.208334       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:29:09.309478       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:40:08 embed-certs-484167 kubelet[934]: E0717 01:40:08.950546     934 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 01:40:08 embed-certs-484167 kubelet[934]: E0717 01:40:08.951038     934 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 01:40:08 embed-certs-484167 kubelet[934]: E0717 01:40:08.951861     934 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zvfpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-2qwf6_kube-system(caefc20d-d993-46cb-b815-e4ae30ce4e85): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 01:40:08 embed-certs-484167 kubelet[934]: E0717 01:40:08.952306     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:40:19 embed-certs-484167 kubelet[934]: E0717 01:40:19.934751     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:40:32 embed-certs-484167 kubelet[934]: E0717 01:40:32.933995     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:40:46 embed-certs-484167 kubelet[934]: E0717 01:40:46.933481     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:40:59 embed-certs-484167 kubelet[934]: E0717 01:40:59.934610     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:41:04 embed-certs-484167 kubelet[934]: E0717 01:41:04.954667     934 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:41:04 embed-certs-484167 kubelet[934]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:41:04 embed-certs-484167 kubelet[934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:41:04 embed-certs-484167 kubelet[934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:41:04 embed-certs-484167 kubelet[934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:41:14 embed-certs-484167 kubelet[934]: E0717 01:41:14.934943     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:41:29 embed-certs-484167 kubelet[934]: E0717 01:41:29.934529     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:41:41 embed-certs-484167 kubelet[934]: E0717 01:41:41.934416     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:41:53 embed-certs-484167 kubelet[934]: E0717 01:41:53.934195     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:42:04 embed-certs-484167 kubelet[934]: E0717 01:42:04.956753     934 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:42:04 embed-certs-484167 kubelet[934]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:42:04 embed-certs-484167 kubelet[934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:42:04 embed-certs-484167 kubelet[934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:42:04 embed-certs-484167 kubelet[934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:42:07 embed-certs-484167 kubelet[934]: E0717 01:42:07.934338     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:42:21 embed-certs-484167 kubelet[934]: E0717 01:42:21.933932     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:42:33 embed-certs-484167 kubelet[934]: E0717 01:42:33.933865     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	
	
	==> storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] <==
	I0717 01:29:41.267596       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:29:41.279238       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:29:41.279312       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:29:58.678058       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:29:58.678222       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-484167_47d48a8e-425f-4307-803e-6d7e5fd0690c!
	I0717 01:29:58.679652       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d2ee878-b2ac-4f2d-a5aa-b2ff6d096a10", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-484167_47d48a8e-425f-4307-803e-6d7e5fd0690c became leader
	I0717 01:29:58.778949       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-484167_47d48a8e-425f-4307-803e-6d7e5fd0690c!
	
	
	==> storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] <==
	I0717 01:29:10.535645       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 01:29:40.538575       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-484167 -n embed-certs-484167
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-484167 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-2qwf6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-484167 describe pod metrics-server-569cc877fc-2qwf6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-484167 describe pod metrics-server-569cc877fc-2qwf6: exit status 1 (59.053222ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-2qwf6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-484167 describe pod metrics-server-569cc877fc-2qwf6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.25s)
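Note: the only non-running pod in the post-mortem above is metrics-server-569cc877fc-2qwf6, and the kubelet log shows why: the image pull backs off against fake.domain/registry.k8s.io/echoserver:1.4, which matches the --registries=MetricsServer=fake.domain override used when the addon was enabled, so that pod is expected never to become Ready. The NotFound from the follow-up describe is most likely only a namespace mismatch (the pod lives in kube-system, but describe ran without -n kube-system). For reference, a minimal client-go sketch of the same non-running-pod check the helper issues via kubectl; this is illustrative only, not the code in helpers_test.go, and it assumes a kubeconfig at the default location:

// Illustrative sketch: list pods across all namespaces whose phase is not Running,
// mirroring `kubectl get po -A --field-selector=status.phase!=Running`.
// Not minikube's helper code; assumes ~/.kube/config is reachable.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Field selector keeps only pods that are not in phase Running.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}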

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-945694 -n default-k8s-diff-port-945694
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-17 01:43:01.106381573 +0000 UTC m=+5911.910527398
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
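Note: this is the same wait pattern as the embed-certs failure above: the test polls for up to 9m0s for a pod labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, and the WARNING above ("client rate limiter Wait returned an error: context deadline exceeded") is simply what a list call reports once that context has expired. A rough client-go sketch of such a wait loop follows; it is illustrative only (the real logic lives in helpers_test.go / start_stop_delete_test.go), and the package and function names here are assumptions:

// Illustrative sketch of a label-selector wait with a deadline; not minikube's code.
package waitutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitForLabeledPodsRunning polls until at least one pod matching selector in ns
// is in phase Running, or ctx (e.g. a 9m0s timeout) expires.
//
// Usage sketch:
//   ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
//   defer cancel()
//   err := WaitForLabeledPodsRunning(ctx, clientset, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard")
func WaitForLabeledPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		// List errors (including rate-limiter errors once ctx is done) fall through
		// to the next poll; the deadline is what ultimately ends the loop.
		select {
		case <-ctx.Done():
			return fmt.Errorf("no Running pod matching %q in %q: %w", selector, ns, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}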
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-945694 -n default-k8s-diff-port-945694
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-945694 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-945694 logs -n 25: (1.270742843s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-261470                              | running-upgrade-261470       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-621535                              | stopped-upgrade-621535       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:19 UTC |
	| start   | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-729236                           | kubernetes-upgrade-729236    | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	| start   | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-249342                              | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-249342             | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-249342                              | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-261470                              | running-upgrade-261470       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	| start   | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:22 UTC |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-484167            | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:21 UTC | 17 Jul 24 01:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-945694  | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC | 17 Jul 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC |                     |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-484167                 | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:23 UTC | 17 Jul 24 01:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC | 17 Jul 24 01:28 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-945694       | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC | 17 Jul 24 01:34 UTC |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC | 17 Jul 24 01:28 UTC |
	| start   | -p no-preload-818382 --memory=2200                     | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC | 17 Jul 24 01:30 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-818382             | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:30 UTC | 17 Jul 24 01:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-818382                                   | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-818382                  | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-818382 --memory=2200                     | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:32 UTC | 17 Jul 24 01:42 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:32:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:32:43.547613   69161 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:32:43.547856   69161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:32:43.547865   69161 out.go:304] Setting ErrFile to fd 2...
	I0717 01:32:43.547869   69161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:32:43.548058   69161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:32:43.548591   69161 out.go:298] Setting JSON to false
	I0717 01:32:43.549476   69161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8113,"bootTime":1721171851,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:32:43.549531   69161 start.go:139] virtualization: kvm guest
	I0717 01:32:43.551667   69161 out.go:177] * [no-preload-818382] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:32:43.552978   69161 notify.go:220] Checking for updates...
	I0717 01:32:43.553027   69161 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:32:43.554498   69161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:32:43.555767   69161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:32:43.557080   69161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:32:43.558402   69161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:32:43.559566   69161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:32:43.561137   69161 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:32:43.561542   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:32:43.561591   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:43.576810   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0717 01:32:43.577217   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:43.577724   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:32:43.577746   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:43.578068   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:43.578246   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.578474   69161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:32:43.578722   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:32:43.578751   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:43.593634   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0717 01:32:43.594007   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:43.594435   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:32:43.594460   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:43.594810   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:43.594984   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.632126   69161 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:32:43.633290   69161 start.go:297] selected driver: kvm2
	I0717 01:32:43.633305   69161 start.go:901] validating driver "kvm2" against &{Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:32:43.633393   69161 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:32:43.634018   69161 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.634085   69161 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:32:43.648838   69161 install.go:137] /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:32:43.649342   69161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:32:43.649377   69161 cni.go:84] Creating CNI manager for ""
	I0717 01:32:43.649388   69161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:32:43.649454   69161 start.go:340] cluster config:
	{Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:32:43.649575   69161 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.651213   69161 out.go:177] * Starting "no-preload-818382" primary control-plane node in "no-preload-818382" cluster
	I0717 01:32:43.652698   69161 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:32:43.652866   69161 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/config.json ...
	I0717 01:32:43.652971   69161 cache.go:107] acquiring lock: {Name:mk0dda4d4cdd92722b746ab931e6544cfc8daee5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.652980   69161 cache.go:107] acquiring lock: {Name:mk1de3a52aa61e3b4e847379240ac3935bedb199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653004   69161 cache.go:107] acquiring lock: {Name:mkf6e5b69e84ed3f384772a188b9364b7e3d5b5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653072   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 01:32:43.653091   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0717 01:32:43.653102   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0717 01:32:43.653107   69161 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 146.502µs
	I0717 01:32:43.653119   69161 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653117   69161 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 121.37µs
	I0717 01:32:43.653137   69161 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653098   69161 cache.go:107] acquiring lock: {Name:mkf2f11535addf893c2faa84c376231e8d922e64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653127   69161 cache.go:107] acquiring lock: {Name:mk0f717937d10c133c40dfa3d731090d6e186c8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653157   69161 cache.go:107] acquiring lock: {Name:mkddaaee919763be73bfba0c581555b8cc97a67b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653143   69161 cache.go:107] acquiring lock: {Name:mkecaf352dd381368806d2a149fd31f0c349a680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653184   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 exists
	I0717 01:32:43.653170   69161 start.go:360] acquireMachinesLock for no-preload-818382: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:32:43.653201   69161 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0" took 76.404µs
	I0717 01:32:43.653211   69161 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0717 01:32:43.653256   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0717 01:32:43.653259   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0717 01:32:43.653270   69161 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 131.092µs
	I0717 01:32:43.653278   69161 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653278   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0717 01:32:43.653273   69161 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 220.448µs
	I0717 01:32:43.653293   69161 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0717 01:32:43.653292   69161 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 138.342µs
	I0717 01:32:43.653303   69161 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0717 01:32:43.653142   69161 cache.go:107] acquiring lock: {Name:mk2ca5e82f37242a4f02d1776db6559bdb43421e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653316   69161 start.go:364] duration metric: took 84.706µs to acquireMachinesLock for "no-preload-818382"
	I0717 01:32:43.653101   69161 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 132.422µs
	I0717 01:32:43.653358   69161 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:32:43.653360   69161 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 01:32:43.653365   69161 fix.go:54] fixHost starting: 
	I0717 01:32:43.653345   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0717 01:32:43.653380   69161 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 247.182µs
	I0717 01:32:43.653397   69161 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653413   69161 cache.go:87] Successfully saved all images to host disk.
	I0717 01:32:43.653791   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:32:43.653851   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:43.669140   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0717 01:32:43.669544   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:43.669975   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:32:43.669995   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:43.670285   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:43.670451   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.670597   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:32:43.672083   69161 fix.go:112] recreateIfNeeded on no-preload-818382: state=Running err=<nil>
	W0717 01:32:43.672118   69161 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:32:43.674037   69161 out.go:177] * Updating the running kvm2 "no-preload-818382" VM ...
	I0717 01:32:40.312635   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:42.810125   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:44.006444   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:46.006933   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:43.675220   69161 machine.go:94] provisionDockerMachine start ...
	I0717 01:32:43.675236   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.675410   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:32:43.677780   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:32:43.678159   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:29:11 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:32:43.678194   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:32:43.678285   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:32:43.678480   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:32:43.678635   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:32:43.678751   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:32:43.678900   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:32:43.679072   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:32:43.679082   69161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:32:46.576890   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:44.811604   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:47.310107   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:49.310610   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:48.007526   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:50.506280   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:49.648813   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:51.310765   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:53.810052   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:53.007282   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:55.506679   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:57.506743   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:55.728954   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:55.810343   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:57.810539   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:00.007367   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:02.509717   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:58.800813   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:59.810958   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:02.310473   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:02.804718   66659 pod_ready.go:81] duration metric: took 4m0.000441849s for pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:02.804758   66659 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 01:33:02.804776   66659 pod_ready.go:38] duration metric: took 4m11.542416864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:02.804800   66659 kubeadm.go:597] duration metric: took 4m19.055059195s to restartPrimaryControlPlane
	W0717 01:33:02.804851   66659 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 01:33:02.804875   66659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 01:33:05.008344   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:07.008631   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:04.880862   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:07.956811   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:09.506709   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:12.007454   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:14.007849   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:16.506348   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:17.072888   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:19.005817   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:21.006641   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:20.144862   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:23.007827   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:24.506621   66178 pod_ready.go:81] duration metric: took 4m0.006337956s for pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:24.506648   66178 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 01:33:24.506656   66178 pod_ready.go:38] duration metric: took 4m4.541684979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:24.506672   66178 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:33:24.506700   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:33:24.506752   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:33:24.553972   66178 cri.go:89] found id: "d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:24.553994   66178 cri.go:89] found id: ""
	I0717 01:33:24.554003   66178 logs.go:276] 1 containers: [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026]
	I0717 01:33:24.554067   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.558329   66178 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:33:24.558382   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:33:24.593681   66178 cri.go:89] found id: "980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:24.593710   66178 cri.go:89] found id: ""
	I0717 01:33:24.593717   66178 logs.go:276] 1 containers: [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c]
	I0717 01:33:24.593764   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.598462   66178 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:33:24.598521   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:33:24.638597   66178 cri.go:89] found id: "370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:24.638617   66178 cri.go:89] found id: ""
	I0717 01:33:24.638624   66178 logs.go:276] 1 containers: [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187]
	I0717 01:33:24.638674   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.642611   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:33:24.642674   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:33:24.678207   66178 cri.go:89] found id: "98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:24.678227   66178 cri.go:89] found id: ""
	I0717 01:33:24.678233   66178 logs.go:276] 1 containers: [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802]
	I0717 01:33:24.678284   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.682820   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:33:24.682884   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:33:24.724141   66178 cri.go:89] found id: "2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:24.724170   66178 cri.go:89] found id: ""
	I0717 01:33:24.724179   66178 logs.go:276] 1 containers: [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364]
	I0717 01:33:24.724231   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.729301   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:33:24.729355   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:33:24.765894   66178 cri.go:89] found id: "b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:24.765916   66178 cri.go:89] found id: ""
	I0717 01:33:24.765925   66178 logs.go:276] 1 containers: [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c]
	I0717 01:33:24.765970   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.770898   66178 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:33:24.770951   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:33:24.805812   66178 cri.go:89] found id: ""
	I0717 01:33:24.805835   66178 logs.go:276] 0 containers: []
	W0717 01:33:24.805842   66178 logs.go:278] No container was found matching "kindnet"
	I0717 01:33:24.805848   66178 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:33:24.805897   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:33:24.847766   66178 cri.go:89] found id: "a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:24.847788   66178 cri.go:89] found id: "dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:24.847794   66178 cri.go:89] found id: ""
	I0717 01:33:24.847802   66178 logs.go:276] 2 containers: [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272]
	I0717 01:33:24.847852   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.852045   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.856136   66178 logs.go:123] Gathering logs for kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] ...
	I0717 01:33:24.856161   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:24.892801   66178 logs.go:123] Gathering logs for kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] ...
	I0717 01:33:24.892829   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:24.944203   66178 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:33:24.944236   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:33:25.482400   66178 logs.go:123] Gathering logs for kubelet ...
	I0717 01:33:25.482440   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:33:25.544150   66178 logs.go:123] Gathering logs for dmesg ...
	I0717 01:33:25.544190   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:33:25.559587   66178 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:33:25.559620   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:33:25.679463   66178 logs.go:123] Gathering logs for kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] ...
	I0717 01:33:25.679488   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:25.725117   66178 logs.go:123] Gathering logs for coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] ...
	I0717 01:33:25.725144   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:25.771390   66178 logs.go:123] Gathering logs for container status ...
	I0717 01:33:25.771417   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:33:25.818766   66178 logs.go:123] Gathering logs for etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] ...
	I0717 01:33:25.818792   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:25.861973   66178 logs.go:123] Gathering logs for kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] ...
	I0717 01:33:25.862008   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:25.899694   66178 logs.go:123] Gathering logs for storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] ...
	I0717 01:33:25.899723   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:25.937573   66178 logs.go:123] Gathering logs for storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] ...
	I0717 01:33:25.937604   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:26.224800   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:28.476050   66178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:33:28.491506   66178 api_server.go:72] duration metric: took 4m14.298590069s to wait for apiserver process to appear ...
	I0717 01:33:28.491527   66178 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:33:28.491568   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:33:28.491626   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:33:28.526854   66178 cri.go:89] found id: "d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:28.526882   66178 cri.go:89] found id: ""
	I0717 01:33:28.526891   66178 logs.go:276] 1 containers: [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026]
	I0717 01:33:28.526957   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.531219   66178 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:33:28.531282   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:33:28.567901   66178 cri.go:89] found id: "980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:28.567927   66178 cri.go:89] found id: ""
	I0717 01:33:28.567937   66178 logs.go:276] 1 containers: [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c]
	I0717 01:33:28.567995   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.572030   66178 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:33:28.572094   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:33:28.606586   66178 cri.go:89] found id: "370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:28.606610   66178 cri.go:89] found id: ""
	I0717 01:33:28.606622   66178 logs.go:276] 1 containers: [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187]
	I0717 01:33:28.606679   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.611494   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:33:28.611555   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:33:28.647224   66178 cri.go:89] found id: "98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:28.647247   66178 cri.go:89] found id: ""
	I0717 01:33:28.647255   66178 logs.go:276] 1 containers: [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802]
	I0717 01:33:28.647311   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.651314   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:33:28.651376   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:33:28.686387   66178 cri.go:89] found id: "2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:28.686412   66178 cri.go:89] found id: ""
	I0717 01:33:28.686420   66178 logs.go:276] 1 containers: [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364]
	I0717 01:33:28.686473   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.691061   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:33:28.691128   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:33:28.728066   66178 cri.go:89] found id: "b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:28.728091   66178 cri.go:89] found id: ""
	I0717 01:33:28.728099   66178 logs.go:276] 1 containers: [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c]
	I0717 01:33:28.728147   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.732397   66178 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:33:28.732446   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:33:28.770233   66178 cri.go:89] found id: ""
	I0717 01:33:28.770261   66178 logs.go:276] 0 containers: []
	W0717 01:33:28.770270   66178 logs.go:278] No container was found matching "kindnet"
	I0717 01:33:28.770277   66178 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:33:28.770338   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:33:28.806271   66178 cri.go:89] found id: "a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:28.806296   66178 cri.go:89] found id: "dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:28.806302   66178 cri.go:89] found id: ""
	I0717 01:33:28.806311   66178 logs.go:276] 2 containers: [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272]
	I0717 01:33:28.806371   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.810691   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.814958   66178 logs.go:123] Gathering logs for kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] ...
	I0717 01:33:28.814976   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:28.856685   66178 logs.go:123] Gathering logs for etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] ...
	I0717 01:33:28.856712   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:28.897748   66178 logs.go:123] Gathering logs for kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] ...
	I0717 01:33:28.897790   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:28.958202   66178 logs.go:123] Gathering logs for coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] ...
	I0717 01:33:28.958228   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:28.999474   66178 logs.go:123] Gathering logs for kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] ...
	I0717 01:33:28.999501   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:29.035726   66178 logs.go:123] Gathering logs for kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] ...
	I0717 01:33:29.035758   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:29.072498   66178 logs.go:123] Gathering logs for storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] ...
	I0717 01:33:29.072524   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:29.110199   66178 logs.go:123] Gathering logs for storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] ...
	I0717 01:33:29.110226   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:29.144474   66178 logs.go:123] Gathering logs for kubelet ...
	I0717 01:33:29.144506   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:33:29.196286   66178 logs.go:123] Gathering logs for dmesg ...
	I0717 01:33:29.196315   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:33:29.210251   66178 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:33:29.210274   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:33:29.313845   66178 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:33:29.313877   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:33:29.748683   66178 logs.go:123] Gathering logs for container status ...
	I0717 01:33:29.748719   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
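
The log-collection pattern above (list container IDs with "crictl ps -a --quiet --name=<component>", then tail each container's log) can be reproduced by hand. Below is a minimal local sketch in Go, assuming crictl is on PATH and the process may talk to the CRI socket; minikube itself runs the same commands remotely, as root, through ssh_runner.

    // loggather.go: sketch of the per-component log collection seen in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors `crictl ps -a --quiet --name=<name>`: every matching
    // container ID, running or exited.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    // tailLogs mirrors `crictl logs --tail 400 <id>`.
    func tailLogs(id string) (string, error) {
    	out, err := exec.Command("crictl", "logs", "--tail", "400", id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// Same component list the test walks through.
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(name)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no containers found matching %q\n", name)
    			continue
    		}
    		for _, id := range ids {
    			logs, _ := tailLogs(id)
    			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
    		}
    	}
    }
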
	I0717 01:33:32.292005   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:33:32.296375   66178 api_server.go:279] https://192.168.72.48:8443/healthz returned 200:
	ok
	I0717 01:33:32.297480   66178 api_server.go:141] control plane version: v1.30.2
	I0717 01:33:32.297499   66178 api_server.go:131] duration metric: took 3.805966225s to wait for apiserver health ...
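
The healthz wait above polls the apiserver until it answers 200 "ok". A minimal sketch of that loop follows; the endpoint is taken from the log, while the 4-minute budget and InsecureSkipVerify are illustrative assumptions only (a real client should verify against the cluster CA and present its client certificate).

    // healthz.go: sketch of waiting for apiserver /healthz to report ok.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the URL until it returns 200 or the deadline expires.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s returned 200: %s\n", url, body)
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second) // retry interval; the wait above completed in ~3.8s
    	}
    	return fmt.Errorf("apiserver healthz not healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.48:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
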
	I0717 01:33:32.297507   66178 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:33:32.297528   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:33:32.297569   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:33:32.336526   66178 cri.go:89] found id: "d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:32.336566   66178 cri.go:89] found id: ""
	I0717 01:33:32.336576   66178 logs.go:276] 1 containers: [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026]
	I0717 01:33:32.336629   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.340838   66178 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:33:32.340904   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:33:32.375827   66178 cri.go:89] found id: "980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:32.375853   66178 cri.go:89] found id: ""
	I0717 01:33:32.375862   66178 logs.go:276] 1 containers: [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c]
	I0717 01:33:32.375920   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.380212   66178 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:33:32.380269   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:33:32.417036   66178 cri.go:89] found id: "370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:32.417063   66178 cri.go:89] found id: ""
	I0717 01:33:32.417075   66178 logs.go:276] 1 containers: [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187]
	I0717 01:33:32.417140   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.421437   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:33:32.421507   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:33:32.455708   66178 cri.go:89] found id: "98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:32.455732   66178 cri.go:89] found id: ""
	I0717 01:33:32.455741   66178 logs.go:276] 1 containers: [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802]
	I0717 01:33:32.455799   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.464218   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:33:32.464299   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:33:32.506931   66178 cri.go:89] found id: "2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:32.506958   66178 cri.go:89] found id: ""
	I0717 01:33:32.506968   66178 logs.go:276] 1 containers: [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364]
	I0717 01:33:32.507030   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.511493   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:33:32.511562   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:33:32.554706   66178 cri.go:89] found id: "b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:32.554731   66178 cri.go:89] found id: ""
	I0717 01:33:32.554741   66178 logs.go:276] 1 containers: [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c]
	I0717 01:33:32.554806   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.559101   66178 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:33:32.559175   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:33:32.598078   66178 cri.go:89] found id: ""
	I0717 01:33:32.598113   66178 logs.go:276] 0 containers: []
	W0717 01:33:32.598126   66178 logs.go:278] No container was found matching "kindnet"
	I0717 01:33:32.598135   66178 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:33:32.598209   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:33:29.300812   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:34.426424   66659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.621528106s)
	I0717 01:33:34.426506   66659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:33:34.441446   66659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:33:34.451230   66659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:33:34.460682   66659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:33:34.460702   66659 kubeadm.go:157] found existing configuration files:
	
	I0717 01:33:34.460746   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 01:33:34.469447   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:33:34.469496   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:33:34.478412   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 01:33:34.487047   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:33:34.487096   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:33:34.496243   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 01:33:34.504852   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:33:34.504907   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:33:34.513592   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 01:33:34.521997   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:33:34.522048   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:33:34.530773   66659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
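
The sequence above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint, removes any file that does not reference it, and then re-runs kubeadm init. A minimal sketch of the same steps, assuming it runs as root directly on the node (minikube performs them over SSH via ssh_runner); the flags are copied from the command in the log.

    // stale_config.go: sketch of the stale-config cleanup plus kubeadm init invocation.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8444"
    	confs := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, conf := range confs {
    		// grep exits non-zero when the file is missing or does not mention this
    		// cluster's endpoint; in both cases the stale file is removed.
    		if err := exec.Command("grep", endpoint, conf).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
    			os.Remove(conf) // equivalent of `rm -f`: ignore the error if already gone
    		}
    	}

    	// Re-initialise the control plane, tolerating leftovers expected on a restart.
    	init := exec.Command("kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml",
    		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem")
    	init.Stdout, init.Stderr = os.Stdout, os.Stderr
    	if err := init.Run(); err != nil {
    		fmt.Println("kubeadm init failed:", err)
    	}
    }
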
	I0717 01:33:32.639086   66178 cri.go:89] found id: "a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:32.639113   66178 cri.go:89] found id: "dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:32.639119   66178 cri.go:89] found id: ""
	I0717 01:33:32.639127   66178 logs.go:276] 2 containers: [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272]
	I0717 01:33:32.639185   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.643404   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.648144   66178 logs.go:123] Gathering logs for kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] ...
	I0717 01:33:32.648165   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:32.700179   66178 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:33:32.700212   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:33:33.091798   66178 logs.go:123] Gathering logs for container status ...
	I0717 01:33:33.091840   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:33:33.142057   66178 logs.go:123] Gathering logs for kubelet ...
	I0717 01:33:33.142095   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:33:33.197532   66178 logs.go:123] Gathering logs for kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] ...
	I0717 01:33:33.197567   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:33.248356   66178 logs.go:123] Gathering logs for etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] ...
	I0717 01:33:33.248393   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:33.290624   66178 logs.go:123] Gathering logs for coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] ...
	I0717 01:33:33.290652   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:33.338525   66178 logs.go:123] Gathering logs for kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] ...
	I0717 01:33:33.338557   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:33.379963   66178 logs.go:123] Gathering logs for dmesg ...
	I0717 01:33:33.379998   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:33:33.393448   66178 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:33:33.393472   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:33:33.497330   66178 logs.go:123] Gathering logs for kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] ...
	I0717 01:33:33.497366   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:33.534015   66178 logs.go:123] Gathering logs for storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] ...
	I0717 01:33:33.534048   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:33.569753   66178 logs.go:123] Gathering logs for storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] ...
	I0717 01:33:33.569779   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:36.112668   66178 system_pods.go:59] 8 kube-system pods found
	I0717 01:33:36.112698   66178 system_pods.go:61] "coredns-7db6d8ff4d-z4qpz" [43aa103c-9e70-4fb1-8607-321b6904a218] Running
	I0717 01:33:36.112704   66178 system_pods.go:61] "etcd-embed-certs-484167" [55918032-05ab-4a5b-951c-c8d4a063751e] Running
	I0717 01:33:36.112710   66178 system_pods.go:61] "kube-apiserver-embed-certs-484167" [39facb47-77a1-4eb7-9c7e-795b35adb238] Running
	I0717 01:33:36.112716   66178 system_pods.go:61] "kube-controller-manager-embed-certs-484167" [270c8cb6-2fdd-4cec-9692-ecc2950ce3b2] Running
	I0717 01:33:36.112721   66178 system_pods.go:61] "kube-proxy-gq7qg" [ac9a0ae4-28e0-4900-a39b-f7a0eba7cc06] Running
	I0717 01:33:36.112726   66178 system_pods.go:61] "kube-scheduler-embed-certs-484167" [e9ea6022-e399-42a3-b8c9-a09a57aa8126] Running
	I0717 01:33:36.112734   66178 system_pods.go:61] "metrics-server-569cc877fc-2qwf6" [caefc20d-d993-46cb-b815-e4ae30ce4e85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:33:36.112741   66178 system_pods.go:61] "storage-provisioner" [620df9ee-45a9-4b04-a21c-0ddc878375ca] Running
	I0717 01:33:36.112752   66178 system_pods.go:74] duration metric: took 3.81523968s to wait for pod list to return data ...
	I0717 01:33:36.112760   66178 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:33:36.114860   66178 default_sa.go:45] found service account: "default"
	I0717 01:33:36.114880   66178 default_sa.go:55] duration metric: took 2.115012ms for default service account to be created ...
	I0717 01:33:36.114888   66178 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:33:36.119333   66178 system_pods.go:86] 8 kube-system pods found
	I0717 01:33:36.119357   66178 system_pods.go:89] "coredns-7db6d8ff4d-z4qpz" [43aa103c-9e70-4fb1-8607-321b6904a218] Running
	I0717 01:33:36.119363   66178 system_pods.go:89] "etcd-embed-certs-484167" [55918032-05ab-4a5b-951c-c8d4a063751e] Running
	I0717 01:33:36.119368   66178 system_pods.go:89] "kube-apiserver-embed-certs-484167" [39facb47-77a1-4eb7-9c7e-795b35adb238] Running
	I0717 01:33:36.119372   66178 system_pods.go:89] "kube-controller-manager-embed-certs-484167" [270c8cb6-2fdd-4cec-9692-ecc2950ce3b2] Running
	I0717 01:33:36.119376   66178 system_pods.go:89] "kube-proxy-gq7qg" [ac9a0ae4-28e0-4900-a39b-f7a0eba7cc06] Running
	I0717 01:33:36.119382   66178 system_pods.go:89] "kube-scheduler-embed-certs-484167" [e9ea6022-e399-42a3-b8c9-a09a57aa8126] Running
	I0717 01:33:36.119392   66178 system_pods.go:89] "metrics-server-569cc877fc-2qwf6" [caefc20d-d993-46cb-b815-e4ae30ce4e85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:33:36.119401   66178 system_pods.go:89] "storage-provisioner" [620df9ee-45a9-4b04-a21c-0ddc878375ca] Running
	I0717 01:33:36.119410   66178 system_pods.go:126] duration metric: took 4.516516ms to wait for k8s-apps to be running ...
	I0717 01:33:36.119423   66178 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:33:36.119469   66178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:33:36.135747   66178 system_svc.go:56] duration metric: took 16.316004ms WaitForService to wait for kubelet
	I0717 01:33:36.135778   66178 kubeadm.go:582] duration metric: took 4m21.94286469s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:33:36.135806   66178 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:33:36.140253   66178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:33:36.140274   66178 node_conditions.go:123] node cpu capacity is 2
	I0717 01:33:36.140285   66178 node_conditions.go:105] duration metric: took 4.473888ms to run NodePressure ...
	I0717 01:33:36.140296   66178 start.go:241] waiting for startup goroutines ...
	I0717 01:33:36.140306   66178 start.go:246] waiting for cluster config update ...
	I0717 01:33:36.140326   66178 start.go:255] writing updated cluster config ...
	I0717 01:33:36.140642   66178 ssh_runner.go:195] Run: rm -f paused
	I0717 01:33:36.188858   66178 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:33:36.191016   66178 out.go:177] * Done! kubectl is now configured to use "embed-certs-484167" cluster and "default" namespace by default
	I0717 01:33:35.376822   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:38.448812   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:34.720645   66659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:33:43.308866   66659 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 01:33:43.308943   66659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:33:43.309108   66659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:33:43.309260   66659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:33:43.309392   66659 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 01:33:43.309485   66659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:33:43.311060   66659 out.go:204]   - Generating certificates and keys ...
	I0717 01:33:43.311120   66659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:33:43.311229   66659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:33:43.311320   66659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 01:33:43.311396   66659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 01:33:43.311505   66659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 01:33:43.311595   66659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 01:33:43.311682   66659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 01:33:43.311746   66659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 01:33:43.311807   66659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 01:33:43.311893   66659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 01:33:43.311960   66659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 01:33:43.312019   66659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:33:43.312083   66659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:33:43.312165   66659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 01:33:43.312247   66659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:33:43.312337   66659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:33:43.312395   66659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:33:43.312479   66659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:33:43.312534   66659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:33:43.313917   66659 out.go:204]   - Booting up control plane ...
	I0717 01:33:43.313994   66659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:33:43.314085   66659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:33:43.314183   66659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:33:43.314304   66659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:33:43.314415   66659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:33:43.314471   66659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:33:43.314608   66659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 01:33:43.314728   66659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 01:33:43.314817   66659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00137795s
	I0717 01:33:43.314955   66659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 01:33:43.315048   66659 kubeadm.go:310] [api-check] The API server is healthy after 5.002451289s
	I0717 01:33:43.315206   66659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 01:33:43.315310   66659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 01:33:43.315364   66659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 01:33:43.315550   66659 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-945694 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 01:33:43.315640   66659 kubeadm.go:310] [bootstrap-token] Using token: eqtrsf.jetqj440l3wkhk98
	I0717 01:33:43.317933   66659 out.go:204]   - Configuring RBAC rules ...
	I0717 01:33:43.318050   66659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 01:33:43.318148   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 01:33:43.318293   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 01:33:43.318405   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 01:33:43.318513   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 01:33:43.318599   66659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 01:33:43.318755   66659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 01:33:43.318831   66659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 01:33:43.318883   66659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 01:33:43.318890   66659 kubeadm.go:310] 
	I0717 01:33:43.318937   66659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 01:33:43.318945   66659 kubeadm.go:310] 
	I0717 01:33:43.319058   66659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 01:33:43.319068   66659 kubeadm.go:310] 
	I0717 01:33:43.319102   66659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 01:33:43.319189   66659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 01:33:43.319251   66659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 01:33:43.319257   66659 kubeadm.go:310] 
	I0717 01:33:43.319333   66659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 01:33:43.319343   66659 kubeadm.go:310] 
	I0717 01:33:43.319407   66659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 01:33:43.319416   66659 kubeadm.go:310] 
	I0717 01:33:43.319485   66659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 01:33:43.319607   66659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 01:33:43.319690   66659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 01:33:43.319698   66659 kubeadm.go:310] 
	I0717 01:33:43.319797   66659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 01:33:43.319910   66659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 01:33:43.319925   66659 kubeadm.go:310] 
	I0717 01:33:43.320045   66659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token eqtrsf.jetqj440l3wkhk98 \
	I0717 01:33:43.320187   66659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 \
	I0717 01:33:43.320232   66659 kubeadm.go:310] 	--control-plane 
	I0717 01:33:43.320239   66659 kubeadm.go:310] 
	I0717 01:33:43.320349   66659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 01:33:43.320359   66659 kubeadm.go:310] 
	I0717 01:33:43.320469   66659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token eqtrsf.jetqj440l3wkhk98 \
	I0717 01:33:43.320642   66659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 
	I0717 01:33:43.320672   66659 cni.go:84] Creating CNI manager for ""
	I0717 01:33:43.320685   66659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:33:43.322373   66659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:33:43.323549   66659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:33:43.336069   66659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:33:43.354981   66659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:33:43.355060   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:43.355068   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-945694 minikube.k8s.io/updated_at=2024_07_17T01_33_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=default-k8s-diff-port-945694 minikube.k8s.io/primary=true
	I0717 01:33:43.564470   66659 ops.go:34] apiserver oom_adj: -16
	I0717 01:33:43.564611   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:44.065352   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:44.528766   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:47.604799   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:44.565059   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:45.065658   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:45.565085   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:46.064718   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:46.564689   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:47.064998   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:47.564664   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:48.064694   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:48.565187   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:49.065439   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:49.564950   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:50.065001   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:50.565505   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:51.065369   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:51.564969   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:52.065293   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:52.564953   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:53.065324   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:53.565120   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:54.065189   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:54.565611   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:55.065105   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:55.565494   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:56.065453   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:56.565393   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:56.656280   66659 kubeadm.go:1113] duration metric: took 13.301288619s to wait for elevateKubeSystemPrivileges
	I0717 01:33:56.656319   66659 kubeadm.go:394] duration metric: took 5m12.994113939s to StartCluster
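
The elevateKubeSystemPrivileges step above creates the minikube-rbac clusterrolebinding and then polls "kubectl get sa default" until the default service account exists. A minimal sketch follows, assuming kubectl and the kubeconfig path from the log are available on the node; the timeout and the ~500ms poll interval are inferred from the timestamps, not a documented setting.

    // elevate.go: sketch of granting cluster-admin to kube-system:default and
    // waiting for the default service account to appear.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func kubectl(args ...string) error {
    	args = append(args, "--kubeconfig=/var/lib/minikube/kubeconfig")
    	return exec.Command("kubectl", args...).Run()
    }

    func main() {
    	// Matches the clusterrolebinding created in the log; ignore "already exists".
    	_ = kubectl("create", "clusterrolebinding", "minikube-rbac",
    		"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default")

    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		if err := kubectl("get", "sa", "default"); err == nil {
    			fmt.Println("default service account is present")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
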
	I0717 01:33:56.656341   66659 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:33:56.656429   66659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:33:56.658062   66659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:33:56.658318   66659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:33:56.658384   66659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:33:56.658471   66659 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-945694"
	I0717 01:33:56.658506   66659 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-945694"
	W0717 01:33:56.658516   66659 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:33:56.658514   66659 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-945694"
	I0717 01:33:56.658545   66659 host.go:66] Checking if "default-k8s-diff-port-945694" exists ...
	I0717 01:33:56.658544   66659 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-945694"
	I0717 01:33:56.658565   66659 config.go:182] Loaded profile config "default-k8s-diff-port-945694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:33:56.658566   66659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-945694"
	I0717 01:33:56.658590   66659 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-945694"
	W0717 01:33:56.658603   66659 addons.go:243] addon metrics-server should already be in state true
	I0717 01:33:56.658631   66659 host.go:66] Checking if "default-k8s-diff-port-945694" exists ...
	I0717 01:33:56.658840   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.658867   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.658941   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.658967   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.658946   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.659047   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.660042   66659 out.go:177] * Verifying Kubernetes components...
	I0717 01:33:56.661365   66659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:33:56.675427   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0717 01:33:56.675919   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.676434   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.676455   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.676887   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.677764   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.677807   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.678856   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44785
	I0717 01:33:56.679033   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0717 01:33:56.679281   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.679550   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.680055   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.680079   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.680153   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.680173   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.680443   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.680523   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.680711   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.681210   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.681252   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.684317   66659 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-945694"
	W0717 01:33:56.684338   66659 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:33:56.684362   66659 host.go:66] Checking if "default-k8s-diff-port-945694" exists ...
	I0717 01:33:56.684670   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.684706   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.693393   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0717 01:33:56.693836   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.694292   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.694309   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.694640   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.694801   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.696212   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .DriverName
	I0717 01:33:56.698217   66659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:33:56.699432   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:33:56.699455   66659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:33:56.699472   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHHostname
	I0717 01:33:56.700565   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I0717 01:33:56.701036   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.701563   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.701578   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.701920   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.702150   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.702903   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.703250   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:3e:63", ip: ""} in network mk-default-k8s-diff-port-945694: {Iface:virbr2 ExpiryTime:2024-07-17 02:28:27 +0000 UTC Type:0 Mac:52:54:00:c9:3e:63 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-945694 Clientid:01:52:54:00:c9:3e:63}
	I0717 01:33:56.703275   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined IP address 192.168.50.30 and MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.703457   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHPort
	I0717 01:33:56.703732   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .DriverName
	I0717 01:33:56.703951   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHKeyPath
	I0717 01:33:56.704282   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHUsername
	I0717 01:33:56.704422   66659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/default-k8s-diff-port-945694/id_rsa Username:docker}
	I0717 01:33:56.705576   66659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:33:56.707192   66659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:33:56.707207   66659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:33:56.707219   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHHostname
	I0717 01:33:56.707551   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0717 01:33:56.708045   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.708589   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.708611   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.708957   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.709503   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.709545   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.710201   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.710818   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:3e:63", ip: ""} in network mk-default-k8s-diff-port-945694: {Iface:virbr2 ExpiryTime:2024-07-17 02:28:27 +0000 UTC Type:0 Mac:52:54:00:c9:3e:63 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-945694 Clientid:01:52:54:00:c9:3e:63}
	I0717 01:33:56.710854   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined IP address 192.168.50.30 and MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.711103   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHPort
	I0717 01:33:56.711476   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHKeyPath
	I0717 01:33:56.711751   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHUsername
	I0717 01:33:56.711938   66659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/default-k8s-diff-port-945694/id_rsa Username:docker}
	I0717 01:33:56.724041   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44045
	I0717 01:33:56.724450   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.724943   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.724965   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.725264   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.725481   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.727357   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .DriverName
	I0717 01:33:56.727567   66659 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:33:56.727579   66659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:33:56.727592   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHHostname
	I0717 01:33:56.730575   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.730916   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:3e:63", ip: ""} in network mk-default-k8s-diff-port-945694: {Iface:virbr2 ExpiryTime:2024-07-17 02:28:27 +0000 UTC Type:0 Mac:52:54:00:c9:3e:63 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-945694 Clientid:01:52:54:00:c9:3e:63}
	I0717 01:33:56.730930   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined IP address 192.168.50.30 and MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.731147   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHPort
	I0717 01:33:56.731295   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHKeyPath
	I0717 01:33:56.731414   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHUsername
	I0717 01:33:56.731558   66659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/default-k8s-diff-port-945694/id_rsa Username:docker}
	I0717 01:33:56.880324   66659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:33:56.907224   66659 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-945694" to be "Ready" ...
	I0717 01:33:56.916791   66659 node_ready.go:49] node "default-k8s-diff-port-945694" has status "Ready":"True"
	I0717 01:33:56.916814   66659 node_ready.go:38] duration metric: took 9.553813ms for node "default-k8s-diff-port-945694" to be "Ready" ...
	I0717 01:33:56.916825   66659 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:56.929744   66659 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jbsq5" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:56.991132   66659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:33:57.020549   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:33:57.020582   66659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:33:57.041856   66659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:33:57.095649   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:33:57.095672   66659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:33:57.145707   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:33:57.145737   66659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:33:57.220983   66659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
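
The addon enablement above copies the manifests to /etc/kubernetes/addons and applies them with the node's kubeconfig. A minimal sketch of the apply step, assuming the manifests are already in place and kubectl is available on the node (minikube uses its bundled kubectl binary over SSH); the manifest paths match the commands in the log.

    // addons.go: sketch of applying the storage-provisioner, default-storageclass,
    // and metrics-server addon manifests.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func apply(manifests ...string) error {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := apply("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
    		fmt.Println("storage-provisioner:", err)
    	}
    	if err := apply("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
    		fmt.Println("default-storageclass:", err)
    	}
    	if err := apply(
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	); err != nil {
    		fmt.Println("metrics-server:", err)
    	}
    }
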
	I0717 01:33:57.569863   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.569888   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.569966   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.569995   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.570184   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.570210   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.570221   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.570221   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.570255   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.570230   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.570274   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.570289   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.570314   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.570325   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.570476   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.570508   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.570514   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.572038   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.572054   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.572095   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.584086   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.584114   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.584383   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.584402   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.951559   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.951583   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.952039   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.952039   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.952055   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.952068   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.952076   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.952317   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.952328   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.952338   66659 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-945694"
	I0717 01:33:57.954803   66659 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:33:53.680800   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:56.752809   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:57.956002   66659 addons.go:510] duration metric: took 1.29761252s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 01:33:58.936404   66659 pod_ready.go:92] pod "coredns-7db6d8ff4d-jbsq5" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.936430   66659 pod_ready.go:81] duration metric: took 2.006657028s for pod "coredns-7db6d8ff4d-jbsq5" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.936440   66659 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mqjqg" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.940948   66659 pod_ready.go:92] pod "coredns-7db6d8ff4d-mqjqg" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.940968   66659 pod_ready.go:81] duration metric: took 4.522302ms for pod "coredns-7db6d8ff4d-mqjqg" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.940976   66659 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.944815   66659 pod_ready.go:92] pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.944830   66659 pod_ready.go:81] duration metric: took 3.847888ms for pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.944838   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.949022   66659 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.949039   66659 pod_ready.go:81] duration metric: took 4.196556ms for pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.949049   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.953438   66659 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.953456   66659 pod_ready.go:81] duration metric: took 4.401091ms for pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.953467   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55xmv" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.335149   66659 pod_ready.go:92] pod "kube-proxy-55xmv" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:59.335174   66659 pod_ready.go:81] duration metric: took 381.700119ms for pod "kube-proxy-55xmv" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.335187   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.734445   66659 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:59.734473   66659 pod_ready.go:81] duration metric: took 399.276861ms for pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.734483   66659 pod_ready.go:38] duration metric: took 2.817646454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:59.734499   66659 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:33:59.734557   66659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:33:59.750547   66659 api_server.go:72] duration metric: took 3.092197547s to wait for apiserver process to appear ...
	I0717 01:33:59.750573   66659 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:33:59.750595   66659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0717 01:33:59.755670   66659 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0717 01:33:59.756553   66659 api_server.go:141] control plane version: v1.30.2
	I0717 01:33:59.756591   66659 api_server.go:131] duration metric: took 6.009468ms to wait for apiserver health ...
	I0717 01:33:59.756599   66659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:33:59.938573   66659 system_pods.go:59] 9 kube-system pods found
	I0717 01:33:59.938605   66659 system_pods.go:61] "coredns-7db6d8ff4d-jbsq5" [0a95f33d-19ef-4b2e-a94e-08bbcaff92dc] Running
	I0717 01:33:59.938611   66659 system_pods.go:61] "coredns-7db6d8ff4d-mqjqg" [ca27ce06-d171-4edd-9a1d-11898283f3ac] Running
	I0717 01:33:59.938615   66659 system_pods.go:61] "etcd-default-k8s-diff-port-945694" [213d53e1-92c9-4b8a-b9ff-6b7f12acd149] Running
	I0717 01:33:59.938618   66659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-945694" [b22e53fb-feec-4684-a672-f9c9b326bc36] Running
	I0717 01:33:59.938622   66659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-945694" [dc840bd9-5087-4642-8e84-8392d188e85f] Running
	I0717 01:33:59.938626   66659 system_pods.go:61] "kube-proxy-55xmv" [ee6913d5-3362-4a9f-a159-1f9b1da7380a] Running
	I0717 01:33:59.938631   66659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-945694" [7bfa8bdb-a9af-4e6b-8a11-f9b6791e2647] Running
	I0717 01:33:59.938640   66659 system_pods.go:61] "metrics-server-569cc877fc-4nffv" [ba214ec1-a180-42ec-847e-80464e102765] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:33:59.938646   66659 system_pods.go:61] "storage-provisioner" [3352a0de-41db-4537-b87a-24137084aa7a] Running
	I0717 01:33:59.938657   66659 system_pods.go:74] duration metric: took 182.050448ms to wait for pod list to return data ...
	I0717 01:33:59.938669   66659 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:34:00.133695   66659 default_sa.go:45] found service account: "default"
	I0717 01:34:00.133719   66659 default_sa.go:55] duration metric: took 195.042344ms for default service account to be created ...
	I0717 01:34:00.133729   66659 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:34:00.338087   66659 system_pods.go:86] 9 kube-system pods found
	I0717 01:34:00.338127   66659 system_pods.go:89] "coredns-7db6d8ff4d-jbsq5" [0a95f33d-19ef-4b2e-a94e-08bbcaff92dc] Running
	I0717 01:34:00.338137   66659 system_pods.go:89] "coredns-7db6d8ff4d-mqjqg" [ca27ce06-d171-4edd-9a1d-11898283f3ac] Running
	I0717 01:34:00.338143   66659 system_pods.go:89] "etcd-default-k8s-diff-port-945694" [213d53e1-92c9-4b8a-b9ff-6b7f12acd149] Running
	I0717 01:34:00.338151   66659 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-945694" [b22e53fb-feec-4684-a672-f9c9b326bc36] Running
	I0717 01:34:00.338159   66659 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-945694" [dc840bd9-5087-4642-8e84-8392d188e85f] Running
	I0717 01:34:00.338166   66659 system_pods.go:89] "kube-proxy-55xmv" [ee6913d5-3362-4a9f-a159-1f9b1da7380a] Running
	I0717 01:34:00.338173   66659 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-945694" [7bfa8bdb-a9af-4e6b-8a11-f9b6791e2647] Running
	I0717 01:34:00.338184   66659 system_pods.go:89] "metrics-server-569cc877fc-4nffv" [ba214ec1-a180-42ec-847e-80464e102765] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:34:00.338196   66659 system_pods.go:89] "storage-provisioner" [3352a0de-41db-4537-b87a-24137084aa7a] Running
	I0717 01:34:00.338205   66659 system_pods.go:126] duration metric: took 204.470489ms to wait for k8s-apps to be running ...
	I0717 01:34:00.338218   66659 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:34:00.338274   66659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:34:00.352151   66659 system_svc.go:56] duration metric: took 13.921542ms WaitForService to wait for kubelet
	I0717 01:34:00.352188   66659 kubeadm.go:582] duration metric: took 3.693843091s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:34:00.352213   66659 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:34:00.535457   66659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:34:00.535478   66659 node_conditions.go:123] node cpu capacity is 2
	I0717 01:34:00.535489   66659 node_conditions.go:105] duration metric: took 183.271273ms to run NodePressure ...
	I0717 01:34:00.535500   66659 start.go:241] waiting for startup goroutines ...
	I0717 01:34:00.535506   66659 start.go:246] waiting for cluster config update ...
	I0717 01:34:00.535515   66659 start.go:255] writing updated cluster config ...
	I0717 01:34:00.535731   66659 ssh_runner.go:195] Run: rm -f paused
	I0717 01:34:00.581917   66659 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:34:00.583994   66659 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-945694" cluster and "default" namespace by default
	I0717 01:34:02.832840   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:05.904845   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:11.984893   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:15.056813   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:21.136802   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:24.208771   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:30.288821   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:33.360818   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:39.440802   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:42.512824   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:48.592870   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:51.668822   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:57.744791   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:00.816890   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:06.896783   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:09.968897   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:16.048887   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:19.120810   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:25.200832   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:28.272897   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:34.352811   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:37.424805   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:43.504775   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:46.576767   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:52.656845   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:55.728841   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:01.808828   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:04.880828   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:10.964781   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:14.032790   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:20.112803   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:23.184780   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:29.264888   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:32.340810   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:38.416815   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:41.488801   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:47.572801   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:50.640840   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:56.720825   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:59.792797   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:05.876784   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:08.944812   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:15.024792   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:18.096815   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:21.098660   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:37:21.098691   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:21.098996   69161 buildroot.go:166] provisioning hostname "no-preload-818382"
	I0717 01:37:21.099019   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:21.099239   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:21.100820   69161 machine.go:97] duration metric: took 4m37.425586326s to provisionDockerMachine
	I0717 01:37:21.100856   69161 fix.go:56] duration metric: took 4m37.44749197s for fixHost
	I0717 01:37:21.100862   69161 start.go:83] releasing machines lock for "no-preload-818382", held for 4m37.447517491s
	W0717 01:37:21.100875   69161 start.go:714] error starting host: provision: host is not running
	W0717 01:37:21.100944   69161 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 01:37:21.100953   69161 start.go:729] Will try again in 5 seconds ...
	I0717 01:37:26.102733   69161 start.go:360] acquireMachinesLock for no-preload-818382: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:37:26.102820   69161 start.go:364] duration metric: took 53.679µs to acquireMachinesLock for "no-preload-818382"
	I0717 01:37:26.102845   69161 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:37:26.102852   69161 fix.go:54] fixHost starting: 
	I0717 01:37:26.103150   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:37:26.103173   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:37:26.119906   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33241
	I0717 01:37:26.120394   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:37:26.120930   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:37:26.120952   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:37:26.121328   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:37:26.121541   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:26.121680   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:37:26.123050   69161 fix.go:112] recreateIfNeeded on no-preload-818382: state=Stopped err=<nil>
	I0717 01:37:26.123069   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	W0717 01:37:26.123226   69161 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:37:26.125020   69161 out.go:177] * Restarting existing kvm2 VM for "no-preload-818382" ...
	I0717 01:37:26.126273   69161 main.go:141] libmachine: (no-preload-818382) Calling .Start
	I0717 01:37:26.126469   69161 main.go:141] libmachine: (no-preload-818382) Ensuring networks are active...
	I0717 01:37:26.127225   69161 main.go:141] libmachine: (no-preload-818382) Ensuring network default is active
	I0717 01:37:26.127552   69161 main.go:141] libmachine: (no-preload-818382) Ensuring network mk-no-preload-818382 is active
	I0717 01:37:26.127899   69161 main.go:141] libmachine: (no-preload-818382) Getting domain xml...
	I0717 01:37:26.128571   69161 main.go:141] libmachine: (no-preload-818382) Creating domain...
	I0717 01:37:27.345119   69161 main.go:141] libmachine: (no-preload-818382) Waiting to get IP...
	I0717 01:37:27.346205   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:27.346716   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:27.346764   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:27.346681   70303 retry.go:31] will retry after 199.66464ms: waiting for machine to come up
	I0717 01:37:27.548206   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:27.548848   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:27.548873   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:27.548815   70303 retry.go:31] will retry after 280.929524ms: waiting for machine to come up
	I0717 01:37:27.831501   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:27.831934   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:27.831964   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:27.831916   70303 retry.go:31] will retry after 301.466781ms: waiting for machine to come up
	I0717 01:37:28.135465   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:28.135945   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:28.135981   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:28.135907   70303 retry.go:31] will retry after 393.103911ms: waiting for machine to come up
	I0717 01:37:28.530344   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:28.530791   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:28.530815   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:28.530761   70303 retry.go:31] will retry after 518.699896ms: waiting for machine to come up
	I0717 01:37:29.051266   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:29.051722   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:29.051763   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:29.051702   70303 retry.go:31] will retry after 618.253779ms: waiting for machine to come up
	I0717 01:37:29.671578   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:29.672083   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:29.672111   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:29.672032   70303 retry.go:31] will retry after 718.051367ms: waiting for machine to come up
	I0717 01:37:30.391904   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:30.392339   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:30.392367   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:30.392290   70303 retry.go:31] will retry after 1.040644293s: waiting for machine to come up
	I0717 01:37:31.434846   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:31.435419   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:31.435467   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:31.435401   70303 retry.go:31] will retry after 1.802022391s: waiting for machine to come up
	I0717 01:37:33.238798   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:33.239381   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:33.239409   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:33.239333   70303 retry.go:31] will retry after 1.417897015s: waiting for machine to come up
	I0717 01:37:34.658523   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:34.659018   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:34.659046   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:34.658971   70303 retry.go:31] will retry after 2.736057609s: waiting for machine to come up
	I0717 01:37:37.396582   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:37.397249   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:37.397279   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:37.397179   70303 retry.go:31] will retry after 2.2175965s: waiting for machine to come up
	I0717 01:37:39.616404   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:39.616819   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:39.616852   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:39.616775   70303 retry.go:31] will retry after 4.136811081s: waiting for machine to come up
	I0717 01:37:43.754795   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.755339   69161 main.go:141] libmachine: (no-preload-818382) Found IP for machine: 192.168.39.38
	I0717 01:37:43.755364   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has current primary IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.755370   69161 main.go:141] libmachine: (no-preload-818382) Reserving static IP address...
	I0717 01:37:43.755825   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "no-preload-818382", mac: "52:54:00:e4:de:04", ip: "192.168.39.38"} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.755856   69161 main.go:141] libmachine: (no-preload-818382) Reserved static IP address: 192.168.39.38
	I0717 01:37:43.755870   69161 main.go:141] libmachine: (no-preload-818382) DBG | skip adding static IP to network mk-no-preload-818382 - found existing host DHCP lease matching {name: "no-preload-818382", mac: "52:54:00:e4:de:04", ip: "192.168.39.38"}
	I0717 01:37:43.755885   69161 main.go:141] libmachine: (no-preload-818382) DBG | Getting to WaitForSSH function...
	I0717 01:37:43.755893   69161 main.go:141] libmachine: (no-preload-818382) Waiting for SSH to be available...
	I0717 01:37:43.758007   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.758337   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.758366   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.758581   69161 main.go:141] libmachine: (no-preload-818382) DBG | Using SSH client type: external
	I0717 01:37:43.758615   69161 main.go:141] libmachine: (no-preload-818382) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa (-rw-------)
	I0717 01:37:43.758640   69161 main.go:141] libmachine: (no-preload-818382) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:37:43.758650   69161 main.go:141] libmachine: (no-preload-818382) DBG | About to run SSH command:
	I0717 01:37:43.758662   69161 main.go:141] libmachine: (no-preload-818382) DBG | exit 0
	I0717 01:37:43.884574   69161 main.go:141] libmachine: (no-preload-818382) DBG | SSH cmd err, output: <nil>: 
	I0717 01:37:43.884894   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetConfigRaw
	I0717 01:37:43.885637   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:43.888140   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.888641   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.888673   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.888992   69161 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/config.json ...
	I0717 01:37:43.889212   69161 machine.go:94] provisionDockerMachine start ...
	I0717 01:37:43.889237   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:43.889449   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:43.892095   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.892409   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.892451   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.892636   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:43.892814   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:43.892978   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:43.893129   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:43.893272   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:43.893472   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:43.893487   69161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:37:44.004698   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:37:44.004726   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:44.005009   69161 buildroot.go:166] provisioning hostname "no-preload-818382"
	I0717 01:37:44.005035   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:44.005206   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.008187   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.008700   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.008726   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.008920   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.009094   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.009286   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.009441   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.009612   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:44.009770   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:44.009781   69161 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-818382 && echo "no-preload-818382" | sudo tee /etc/hostname
	I0717 01:37:44.136253   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-818382
	
	I0717 01:37:44.136281   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.138973   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.139255   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.139284   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.139469   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.139643   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.139828   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.140012   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.140288   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:44.140479   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:44.140504   69161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-818382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-818382/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-818382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:37:44.266505   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:37:44.266534   69161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 01:37:44.266551   69161 buildroot.go:174] setting up certificates
	I0717 01:37:44.266562   69161 provision.go:84] configureAuth start
	I0717 01:37:44.266580   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:44.266878   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:44.269798   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.270235   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.270268   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.270404   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.272533   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.272880   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.272907   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.273042   69161 provision.go:143] copyHostCerts
	I0717 01:37:44.273125   69161 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 01:37:44.273144   69161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 01:37:44.273206   69161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 01:37:44.273316   69161 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 01:37:44.273326   69161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 01:37:44.273351   69161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 01:37:44.273410   69161 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 01:37:44.273414   69161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 01:37:44.273433   69161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 01:37:44.273487   69161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.no-preload-818382 san=[127.0.0.1 192.168.39.38 localhost minikube no-preload-818382]
	I0717 01:37:44.479434   69161 provision.go:177] copyRemoteCerts
	I0717 01:37:44.479494   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:37:44.479540   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.482477   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.482908   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.482946   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.483128   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.483327   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.483455   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.483580   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:44.571236   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:37:44.596972   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 01:37:44.621104   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:37:44.643869   69161 provision.go:87] duration metric: took 377.294141ms to configureAuth
	I0717 01:37:44.643898   69161 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:37:44.644105   69161 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:37:44.644180   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.646792   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.647149   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.647179   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.647336   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.647539   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.647675   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.647780   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.647927   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:44.648096   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:44.648110   69161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:37:44.939532   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:37:44.939559   69161 machine.go:97] duration metric: took 1.050331351s to provisionDockerMachine
	I0717 01:37:44.939571   69161 start.go:293] postStartSetup for "no-preload-818382" (driver="kvm2")
	I0717 01:37:44.939587   69161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:37:44.939631   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:44.940024   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:37:44.940056   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.942783   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.943199   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.943225   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.943340   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.943504   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.943643   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.943806   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:45.027519   69161 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:37:45.031577   69161 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:37:45.031599   69161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:37:45.031667   69161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:37:45.031760   69161 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:37:45.031877   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:37:45.041021   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:37:45.064965   69161 start.go:296] duration metric: took 125.382388ms for postStartSetup
	I0717 01:37:45.064998   69161 fix.go:56] duration metric: took 18.96214661s for fixHost
	I0717 01:37:45.065016   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:45.067787   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.068183   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.068217   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.068340   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:45.068582   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.068751   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.068904   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:45.069063   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:45.069226   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:45.069239   69161 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:37:45.181490   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180265.155979386
	
	I0717 01:37:45.181513   69161 fix.go:216] guest clock: 1721180265.155979386
	I0717 01:37:45.181522   69161 fix.go:229] Guest: 2024-07-17 01:37:45.155979386 +0000 UTC Remote: 2024-07-17 01:37:45.065002166 +0000 UTC m=+301.553951222 (delta=90.97722ms)
	I0717 01:37:45.181546   69161 fix.go:200] guest clock delta is within tolerance: 90.97722ms
	I0717 01:37:45.181551   69161 start.go:83] releasing machines lock for "no-preload-818382", held for 19.07872127s
	I0717 01:37:45.181570   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.181832   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:45.184836   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.185246   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.185273   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.185420   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.185969   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.186161   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.186303   69161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:37:45.186354   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:45.186440   69161 ssh_runner.go:195] Run: cat /version.json
	I0717 01:37:45.186464   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:45.189106   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189351   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189501   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.189548   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189674   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:45.189876   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.189883   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.189910   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189957   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:45.190062   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:45.190122   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.190251   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:45.190283   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:45.190505   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
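sshutil builds its client from the IP, port, key path and username logged above. A minimal sketch of the same kind of key-based SSH client using golang.org/x/crypto/ssh; newSSHClient is a hypothetical helper, and skipping host-key verification is an assumption that only fits throwaway test VMs:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials an SSH connection with a private key, mirroring the
// IP/Port/SSHKeyPath/Username fields logged by sshutil above. Sketch only.
func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs have throwaway host keys
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
}

func main() {
	client, err := newSSHClient("192.168.39.38", 22,
		"/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa", "docker")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		fmt.Println("session failed:", err)
		return
	}
	defer session.Close()
	out, _ := session.CombinedOutput("cat /version.json")
	fmt.Println(string(out))
}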
	I0717 01:37:45.273517   69161 ssh_runner.go:195] Run: systemctl --version
	I0717 01:37:45.297810   69161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:37:45.444285   69161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:37:45.450949   69161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:37:45.451015   69161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:37:45.469442   69161 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:37:45.469470   69161 start.go:495] detecting cgroup driver to use...
	I0717 01:37:45.469534   69161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:37:45.488907   69161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:37:45.503268   69161 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:37:45.503336   69161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:37:45.516933   69161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:37:45.530525   69161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:37:45.642175   69161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:37:45.802107   69161 docker.go:233] disabling docker service ...
	I0717 01:37:45.802170   69161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:37:45.815967   69161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:37:45.827961   69161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:37:45.948333   69161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:37:46.066388   69161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:37:46.081332   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:37:46.102124   69161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 01:37:46.102209   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.113289   69161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:37:46.113361   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.123902   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.133825   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.143399   69161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:37:46.153336   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.163110   69161 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.179869   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
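The sed invocations above rewrite individual key = value settings in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). A rough Go equivalent of one such in-place edit; setCrioOption is a hypothetical helper, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption rewrites a `key = value` line in a CRI-O drop-in config,
// similar to the sed commands logged above. Hypothetical helper, sketch only.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Match the whole line that currently sets the key, anchored per line.
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10")
	_ = setCrioOption(conf, "cgroup_manager", "cgroupfs")
}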
	I0717 01:37:46.190114   69161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:37:46.199740   69161 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:37:46.199791   69161 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:37:46.212405   69161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:37:46.223444   69161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:37:46.337353   69161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:37:46.486553   69161 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:37:46.486616   69161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
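"Will wait 60s for socket path" is implemented by repeatedly stat-ing the CRI socket until it appears or the deadline passes. A minimal sketch of that polling loop; waitForSocket is a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket shows up or the timeout expires,
// roughly what the 60s wait above does via repeated `stat` calls. Sketch only.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}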
	I0717 01:37:46.491747   69161 start.go:563] Will wait 60s for crictl version
	I0717 01:37:46.491820   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:46.495749   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:37:46.537334   69161 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:37:46.537418   69161 ssh_runner.go:195] Run: crio --version
	I0717 01:37:46.566918   69161 ssh_runner.go:195] Run: crio --version
	I0717 01:37:46.598762   69161 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 01:37:46.600041   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:46.602939   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:46.603358   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:46.603387   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:46.603645   69161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:37:46.607975   69161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
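The bash pipeline above makes the host.minikube.internal mapping idempotent: it strips any existing line for that name and appends a fresh IP<tab>name entry. A Go sketch of the same idea; ensureHostsEntry is a hypothetical helper and writes /etc/hosts directly instead of copying a temp file via sudo:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale line ending in "<TAB>name" and appends a
// fresh "IP<TAB>name" mapping, mirroring the grep -v + echo pipeline above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}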
	I0717 01:37:46.621718   69161 kubeadm.go:883] updating cluster {Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:37:46.621869   69161 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:37:46.621921   69161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:37:46.657321   69161 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 01:37:46.657346   69161 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:37:46.657389   69161 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:46.657417   69161 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:46.657446   69161 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 01:37:46.657480   69161 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.657596   69161 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:46.657645   69161 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:46.657653   69161 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.657733   69161 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.659108   69161 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 01:37:46.659120   69161 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:46.659172   69161 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.659109   69161 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:46.659171   69161 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.659209   69161 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:46.659210   69161 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.659110   69161 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:46.818816   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.824725   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.825088   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.825902   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:46.830336   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:46.842814   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 01:37:46.876989   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:46.906964   69161 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 01:37:46.907012   69161 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.907060   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:46.953522   69161 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 01:37:46.953572   69161 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.953624   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:46.985236   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:46.990623   69161 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 01:37:46.990667   69161 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.990715   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.000280   69161 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 01:37:47.000313   69161 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:47.000354   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.009927   69161 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 01:37:47.009976   69161 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:47.010045   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.124625   69161 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 01:37:47.124677   69161 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:47.124706   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:47.124718   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.124805   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:47.124853   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:47.124877   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:47.124906   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:47.124804   69161 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 01:37:47.124949   69161 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:47.124983   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.231159   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:47.231201   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 01:37:47.231217   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:47.231243   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:47.231263   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:47.231302   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:37:47.231349   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:47.231414   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:47.231570   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:47.231431   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:47.231464   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 01:37:47.231715   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:37:47.279220   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 01:37:47.279239   69161 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:37:47.279286   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:37:47.293132   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 01:37:47.293233   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 01:37:47.293243   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:37:47.293309   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 01:37:47.293313   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 01:37:47.293338   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 01:37:47.293480   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 01:37:47.293582   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:37:51.052908   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.773599434s)
	I0717 01:37:51.052941   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 01:37:51.052963   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:51.052960   69161 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (3.759674708s)
	I0717 01:37:51.052994   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 01:37:51.053016   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:51.053020   69161 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.75941775s)
	I0717 01:37:51.053050   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 01:37:52.809764   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.756726059s)
	I0717 01:37:52.809790   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 01:37:52.809818   69161 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:37:52.809884   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:37:54.565189   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.755280201s)
	I0717 01:37:54.565217   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 01:37:54.565251   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:54.565341   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:56.720406   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.155036511s)
	I0717 01:37:56.720439   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 01:37:56.720473   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:56.720538   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:58.168141   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.447572914s)
	I0717 01:37:58.168181   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 01:37:58.168216   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:37:58.168278   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:38:00.033559   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.865254148s)
	I0717 01:38:00.033590   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 01:38:00.033619   69161 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:38:00.033680   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:38:00.885074   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 01:38:00.885123   69161 cache_images.go:123] Successfully loaded all cached images
	I0717 01:38:00.885131   69161 cache_images.go:92] duration metric: took 14.22776998s to LoadCachedImages
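Each cached image above follows the same pattern: probe the runtime with podman image inspect, and only when that fails, stream the cached tarball in with podman load -i. A local sketch of that per-image flow (the real commands run over SSH via ssh_runner); loadIfMissing is a hypothetical helper:

package main

import (
	"fmt"
	"os/exec"
)

// loadIfMissing checks whether the image is already known to the runtime and,
// if not, loads the cached tarball, mirroring the inspect/load pairs above.
func loadIfMissing(image, tarball string) error {
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present in the container runtime
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := loadIfMissing("registry.k8s.io/etcd:3.5.14-0", "/var/lib/minikube/images/etcd_3.5.14-0")
	fmt.Println(err)
}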
	I0717 01:38:00.885149   69161 kubeadm.go:934] updating node { 192.168.39.38 8443 v1.31.0-beta.0 crio true true} ...
	I0717 01:38:00.885276   69161 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-818382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:38:00.885360   69161 ssh_runner.go:195] Run: crio config
	I0717 01:38:00.935613   69161 cni.go:84] Creating CNI manager for ""
	I0717 01:38:00.935637   69161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:38:00.935649   69161 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:38:00.935674   69161 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-818382 NodeName:no-preload-818382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:38:00.935799   69161 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-818382"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:38:00.935866   69161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 01:38:00.946897   69161 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:38:00.946982   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:38:00.956493   69161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0717 01:38:00.974619   69161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 01:38:00.992580   69161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
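The kubeadm.yaml.new copied above is rendered from the node-specific values in the kubeadm options (advertise address, bind port, node name). An illustrative text/template sketch that fills in just the InitConfiguration fragment; the template and the kubeadmParams struct are simplified stand-ins, not minikube's actual templates:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams carries the node-specific values seen in the config above.
// Illustrative only; minikube's real templates are far more complete.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initConfig))
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress: "192.168.39.38",
		BindPort:         8443,
		NodeName:         "no-preload-818382",
	})
}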
	I0717 01:38:01.009552   69161 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0717 01:38:01.013704   69161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:38:01.026053   69161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:38:01.150532   69161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:38:01.167166   69161 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382 for IP: 192.168.39.38
	I0717 01:38:01.167196   69161 certs.go:194] generating shared ca certs ...
	I0717 01:38:01.167219   69161 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:01.167398   69161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:38:01.167485   69161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:38:01.167504   69161 certs.go:256] generating profile certs ...
	I0717 01:38:01.167622   69161 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/client.key
	I0717 01:38:01.167740   69161 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/apiserver.key.0a44641a
	I0717 01:38:01.167811   69161 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/proxy-client.key
	I0717 01:38:01.167996   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:38:01.168037   69161 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:38:01.168049   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:38:01.168094   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:38:01.168137   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:38:01.168176   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:38:01.168241   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:38:01.169161   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:38:01.202385   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:38:01.236910   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:38:01.270000   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:38:01.306655   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:38:01.355634   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:38:01.386958   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:38:01.411202   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:38:01.435949   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:38:01.460843   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:38:01.486827   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:38:01.511874   69161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:38:01.529784   69161 ssh_runner.go:195] Run: openssl version
	I0717 01:38:01.535968   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:38:01.547564   69161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:38:01.552546   69161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:38:01.552611   69161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:38:01.558592   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:38:01.569461   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:38:01.580422   69161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:38:01.585228   69161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:38:01.585276   69161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:38:01.591149   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:38:01.602249   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:38:01.614146   69161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:01.618807   69161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:01.618868   69161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:01.624861   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:38:01.635446   69161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:38:01.640287   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:38:01.646102   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:38:01.651967   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:38:01.658169   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:38:01.664359   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:38:01.670597   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
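The openssl x509 -checkend 86400 runs above verify that each certificate will still be valid 24 hours from now. The same check expressed in Go; checkEnd is a hypothetical helper:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd reports whether the certificate is still valid `window` from now,
// which is what `openssl x509 -checkend 86400` verifies. Sketch only.
func checkEnd(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}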
	I0717 01:38:01.677288   69161 kubeadm.go:392] StartCluster: {Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:38:01.677378   69161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:38:01.677434   69161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:38:01.718896   69161 cri.go:89] found id: ""
	I0717 01:38:01.718964   69161 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:38:01.730404   69161 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:38:01.730426   69161 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:38:01.730467   69161 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:38:01.742131   69161 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:38:01.743114   69161 kubeconfig.go:125] found "no-preload-818382" server: "https://192.168.39.38:8443"
	I0717 01:38:01.745151   69161 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:38:01.755348   69161 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0717 01:38:01.755379   69161 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:38:01.755393   69161 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:38:01.755441   69161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:38:01.794585   69161 cri.go:89] found id: ""
	I0717 01:38:01.794657   69161 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:38:01.811878   69161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:38:01.822275   69161 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:38:01.822297   69161 kubeadm.go:157] found existing configuration files:
	
	I0717 01:38:01.822349   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:38:01.832295   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:38:01.832361   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:38:01.841853   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:38:01.850743   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:38:01.850792   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:38:01.860061   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:38:01.869640   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:38:01.869695   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:38:01.879146   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:38:01.888664   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:38:01.888730   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:38:01.898051   69161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:38:01.907209   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:02.013763   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.064624   69161 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.050830101s)
	I0717 01:38:03.064658   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.281880   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.360185   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.475762   69161 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:38:03.475859   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:38:03.976869   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:38:04.476826   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:38:04.513612   69161 api_server.go:72] duration metric: took 1.03785049s to wait for apiserver process to appear ...
	I0717 01:38:04.513637   69161 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:38:04.513658   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:04.514182   69161 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0717 01:38:05.013987   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:07.606646   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:38:07.606681   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:38:07.606698   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:07.644623   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:38:07.644659   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:38:08.014209   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:08.018649   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:38:08.018675   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:38:08.513802   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:08.523658   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:38:08.523683   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:38:09.013997   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:09.018582   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0717 01:38:09.025524   69161 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 01:38:09.025556   69161 api_server.go:131] duration metric: took 4.511910476s to wait for apiserver health ...
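The loop above polls the apiserver's /healthz endpoint until it stops returning 500 (the failing checks are the rbac and priority-class bootstrap post-start hooks, which finish shortly after restart). To reproduce the same check by hand, a minimal sketch using the cluster's kubeconfig (context name assumed to match the profile):

  # per-check breakdown of the aggregated health endpoint
  kubectl --context no-preload-818382 get --raw='/healthz?verbose'
  # the newer split endpoint gives the same detail
  kubectl --context no-preload-818382 get --raw='/readyz?verbose'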
	I0717 01:38:09.025567   69161 cni.go:84] Creating CNI manager for ""
	I0717 01:38:09.025576   69161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:38:09.026854   69161 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:38:09.028050   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:38:09.054928   69161 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
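Here minikube writes a small bridge CNI config into /etc/cni/net.d on the node. To see exactly what was written, the file can be read from inside the VM; a sketch, assuming the profile name from this run:

  # print the generated bridge CNI configuration
  minikube -p no-preload-818382 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist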
	I0717 01:38:09.099807   69161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:38:09.110763   69161 system_pods.go:59] 8 kube-system pods found
	I0717 01:38:09.110804   69161 system_pods.go:61] "coredns-5cfdc65f69-rzhfk" [eb91980f-dca7-4dd0-902e-7d1ffac4e1b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:38:09.110817   69161 system_pods.go:61] "etcd-no-preload-818382" [99688a8a-50fc-416b-9c00-23a516eab775] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:38:09.110827   69161 system_pods.go:61] "kube-apiserver-no-preload-818382" [3e08eb95-84f7-4541-a2c3-9a5b9e3365f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:38:09.110835   69161 system_pods.go:61] "kube-controller-manager-no-preload-818382" [d356be23-8cd9-4f72-94e6-354a39f587eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:38:09.110843   69161 system_pods.go:61] "kube-proxy-7xjgl" [79ab1bff-5791-464d-98a0-041c53c47234] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:38:09.110852   69161 system_pods.go:61] "kube-scheduler-no-preload-818382" [e148b48b-ee09-49b4-9600-83c039254f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:38:09.110862   69161 system_pods.go:61] "metrics-server-78fcd8795b-vgkwg" [6386b732-76a6-4744-9215-e4764e08e4e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:38:09.110872   69161 system_pods.go:61] "storage-provisioner" [c5a0695e-6c38-463e-8f96-60c0e60c7132] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 01:38:09.110881   69161 system_pods.go:74] duration metric: took 11.048265ms to wait for pod list to return data ...
	I0717 01:38:09.110895   69161 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:38:09.115164   69161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:38:09.115185   69161 node_conditions.go:123] node cpu capacity is 2
	I0717 01:38:09.115195   69161 node_conditions.go:105] duration metric: took 4.295793ms to run NodePressure ...
	I0717 01:38:09.115222   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:09.380448   69161 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:38:09.385062   69161 kubeadm.go:739] kubelet initialised
	I0717 01:38:09.385081   69161 kubeadm.go:740] duration metric: took 4.609373ms waiting for restarted kubelet to initialise ...
	I0717 01:38:09.385089   69161 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:38:09.390128   69161 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.395089   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.395114   69161 pod_ready.go:81] duration metric: took 4.964286ms for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.395122   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.395130   69161 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.400466   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "etcd-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.400485   69161 pod_ready.go:81] duration metric: took 5.34752ms for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.400494   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "etcd-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.400502   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.406059   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-apiserver-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.406079   69161 pod_ready.go:81] duration metric: took 5.569824ms for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.406087   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-apiserver-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.406094   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.508478   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.508503   69161 pod_ready.go:81] duration metric: took 102.401908ms for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.508513   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.508521   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.903484   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-proxy-7xjgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.903507   69161 pod_ready.go:81] duration metric: took 394.977533ms for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.903516   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-proxy-7xjgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.903522   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:10.303374   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-scheduler-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.303400   69161 pod_ready.go:81] duration metric: took 399.87153ms for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:10.303410   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-scheduler-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.303417   69161 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:10.703844   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.703872   69161 pod_ready.go:81] duration metric: took 400.446731ms for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:10.703882   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.703890   69161 pod_ready.go:38] duration metric: took 1.31879349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
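The pod_ready loop above gives up on each pod early because the node itself still reports Ready=False; once the node comes up it re-checks the same label set. An equivalent manual wait with kubectl, a sketch with labels taken from the log line above and the context name assumed:

  kubectl --context no-preload-818382 -n kube-system wait pod \
    -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
  kubectl --context no-preload-818382 -n kube-system wait pod \
    -l component=kube-apiserver --for=condition=Ready --timeout=4m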
	I0717 01:38:10.703906   69161 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:38:10.716314   69161 ops.go:34] apiserver oom_adj: -16
	I0717 01:38:10.716330   69161 kubeadm.go:597] duration metric: took 8.985898425s to restartPrimaryControlPlane
	I0717 01:38:10.716338   69161 kubeadm.go:394] duration metric: took 9.0390568s to StartCluster
	I0717 01:38:10.716357   69161 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:10.716443   69161 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:38:10.718239   69161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:10.718467   69161 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:38:10.718525   69161 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:38:10.718599   69161 addons.go:69] Setting storage-provisioner=true in profile "no-preload-818382"
	I0717 01:38:10.718615   69161 addons.go:69] Setting default-storageclass=true in profile "no-preload-818382"
	I0717 01:38:10.718632   69161 addons.go:234] Setting addon storage-provisioner=true in "no-preload-818382"
	W0717 01:38:10.718641   69161 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:38:10.718657   69161 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-818382"
	I0717 01:38:10.718648   69161 addons.go:69] Setting metrics-server=true in profile "no-preload-818382"
	I0717 01:38:10.718669   69161 host.go:66] Checking if "no-preload-818382" exists ...
	I0717 01:38:10.718684   69161 addons.go:234] Setting addon metrics-server=true in "no-preload-818382"
	W0717 01:38:10.718694   69161 addons.go:243] addon metrics-server should already be in state true
	I0717 01:38:10.718710   69161 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:38:10.718720   69161 host.go:66] Checking if "no-preload-818382" exists ...
	I0717 01:38:10.718995   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.719013   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.719033   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.719036   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.719037   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.719062   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.720225   69161 out.go:177] * Verifying Kubernetes components...
	I0717 01:38:10.721645   69161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:38:10.735669   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I0717 01:38:10.735668   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42639
	I0717 01:38:10.736213   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.736224   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.736697   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.736712   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.736749   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.736761   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.737065   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.737104   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.737517   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37461
	I0717 01:38:10.737604   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.737623   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.737632   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.737643   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.737988   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.738548   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.738575   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.738916   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.739154   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.742601   69161 addons.go:234] Setting addon default-storageclass=true in "no-preload-818382"
	W0717 01:38:10.742621   69161 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:38:10.742649   69161 host.go:66] Checking if "no-preload-818382" exists ...
	I0717 01:38:10.742978   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.743000   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.753050   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40075
	I0717 01:38:10.761069   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.761760   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.761778   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.762198   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.762374   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.764056   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:38:10.766144   69161 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:38:10.767506   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:38:10.767527   69161 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:38:10.767546   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:38:10.770625   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.771141   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:38:10.771169   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.771354   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:38:10.771538   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:38:10.771797   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:38:10.771964   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:38:10.777232   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I0717 01:38:10.777667   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.778207   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.778234   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.778629   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.778820   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.780129   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43699
	I0717 01:38:10.780526   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:38:10.780732   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.781258   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.781283   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.781642   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.782089   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.782134   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.782214   69161 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:38:10.783466   69161 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:38:10.783484   69161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:38:10.783501   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:38:10.786557   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.786985   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:38:10.787006   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.787233   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:38:10.787393   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:38:10.787514   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:38:10.787610   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:38:10.798054   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I0717 01:38:10.798498   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.798922   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.798942   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.799281   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.799452   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.801194   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:38:10.801413   69161 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:38:10.801428   69161 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:38:10.801444   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:38:10.804551   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.804963   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:38:10.804988   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.805103   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:38:10.805413   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:38:10.805564   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:38:10.805712   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:38:10.941843   69161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:38:10.962485   69161 node_ready.go:35] waiting up to 6m0s for node "no-preload-818382" to be "Ready" ...
	I0717 01:38:11.029564   69161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:38:11.047993   69161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:38:11.180628   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:38:11.180648   69161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:38:11.254864   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:38:11.254891   69161 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:38:11.322266   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:38:11.322290   69161 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:38:11.386819   69161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:38:12.107148   69161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.059119392s)
	I0717 01:38:12.107209   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107223   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107351   69161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077746478s)
	I0717 01:38:12.107396   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107407   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107523   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.107542   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.107553   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107562   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107751   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.107766   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.107780   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.107789   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107793   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.107798   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107824   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.107831   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.108023   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.108056   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.108064   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.120981   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.121012   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.121920   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.121942   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.121958   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.192883   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.192908   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.193311   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.193357   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.193369   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.193378   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.193389   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.193656   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.193695   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.193704   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.193720   69161 addons.go:475] Verifying addon metrics-server=true in "no-preload-818382"
	I0717 01:38:12.196085   69161 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:38:12.197195   69161 addons.go:510] duration metric: took 1.478669603s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
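The same three addons the harness applies here can be toggled per profile from the CLI; a sketch:

  minikube -p no-preload-818382 addons enable metrics-server
  minikube -p no-preload-818382 addons enable storage-provisioner
  minikube -p no-preload-818382 addons list   # shows enabled/disabled state per addon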
	I0717 01:38:12.968419   69161 node_ready.go:53] node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:15.466641   69161 node_ready.go:53] node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:17.966396   69161 node_ready.go:49] node "no-preload-818382" has status "Ready":"True"
	I0717 01:38:17.966419   69161 node_ready.go:38] duration metric: took 7.003900387s for node "no-preload-818382" to be "Ready" ...
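node_ready polls the node object until its Ready condition flips to True, which takes about 7 seconds here. A one-liner equivalent (context name assumed):

  kubectl --context no-preload-818382 wait node/no-preload-818382 \
    --for=condition=Ready --timeout=6m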
	I0717 01:38:17.966428   69161 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:38:17.972276   69161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:17.979661   69161 pod_ready.go:92] pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:17.979686   69161 pod_ready.go:81] duration metric: took 7.383414ms for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:17.979700   69161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:19.986664   69161 pod_ready.go:102] pod "etcd-no-preload-818382" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:22.486306   69161 pod_ready.go:102] pod "etcd-no-preload-818382" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:23.988340   69161 pod_ready.go:92] pod "etcd-no-preload-818382" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:23.988366   69161 pod_ready.go:81] duration metric: took 6.008658778s for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.988379   69161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.994341   69161 pod_ready.go:92] pod "kube-apiserver-no-preload-818382" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:23.994369   69161 pod_ready.go:81] duration metric: took 5.983444ms for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.994378   69161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.999839   69161 pod_ready.go:92] pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:23.999858   69161 pod_ready.go:81] duration metric: took 5.472052ms for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.999870   69161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:24.004359   69161 pod_ready.go:92] pod "kube-proxy-7xjgl" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:24.004376   69161 pod_ready.go:81] duration metric: took 4.499078ms for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:24.004388   69161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:24.008711   69161 pod_ready.go:92] pod "kube-scheduler-no-preload-818382" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:24.008728   69161 pod_ready.go:81] duration metric: took 4.333011ms for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:24.008738   69161 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:26.015816   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:28.515069   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:30.515823   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:33.015758   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:35.519125   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:38.015328   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:40.015434   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:42.016074   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:44.515165   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:46.515207   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:48.515526   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:51.015352   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:53.524771   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:55.525830   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:58.015294   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:00.016582   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:02.526596   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:05.017331   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:07.522994   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:10.015668   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:12.016581   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:14.514264   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:16.514483   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:18.514912   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:20.516805   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:23.017254   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:25.520744   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:27.525313   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:30.015300   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:32.515768   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:34.516472   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:37.015323   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:39.519189   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:41.519551   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:43.519612   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:46.015845   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:48.514995   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:51.015723   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:53.518041   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:56.016848   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:58.515231   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:01.014815   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:03.016104   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:05.515128   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:08.015053   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:10.515596   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:12.516108   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:15.016422   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:17.516656   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:20.023212   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:22.516829   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:25.015503   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:27.515818   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:29.516308   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:31.516354   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:34.014939   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:36.015491   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:38.515680   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:40.516729   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:43.015702   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:45.016597   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:47.516644   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:50.016083   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:52.016256   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:54.016658   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:56.019466   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:58.517513   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:01.015342   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:03.016255   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:05.017209   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:07.514660   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:09.515175   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:11.515986   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:14.016122   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:16.516248   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:19.016993   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:21.515181   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:23.515448   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:26.016226   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:28.516309   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:31.016068   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:33.516141   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:36.015057   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:38.015141   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:40.015943   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:42.515237   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:44.515403   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:46.516180   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:49.014892   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:51.019533   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:53.514629   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:55.515878   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:57.516813   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:00.016045   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:02.515848   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:05.017085   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:07.515218   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:10.016436   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:12.514412   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:14.515538   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:17.015473   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:19.516189   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:22.015149   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:24.015247   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:24.015279   69161 pod_ready.go:81] duration metric: took 4m0.006532152s for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	E0717 01:42:24.015291   69161 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 01:42:24.015300   69161 pod_ready.go:38] duration metric: took 4m6.048863476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
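This is the failing condition for the test: metrics-server-78fcd8795b-vgkwg never reports Ready within the extra 4m window, likely because the addon was pointed at the unreachable fake.domain/registry.k8s.io/echoserver:1.4 image shown earlier in this log. To look at the same state by hand, a sketch (label selector assumed from the minikube metrics-server addon, context name assumed):

  kubectl --context no-preload-818382 -n kube-system get pods -l k8s-app=metrics-server -o wide
  kubectl --context no-preload-818382 -n kube-system describe pod -l k8s-app=metrics-server
  kubectl --context no-preload-818382 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20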
	I0717 01:42:24.015319   69161 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:42:24.015354   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:42:24.015412   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:42:24.070533   69161 cri.go:89] found id: "8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:24.070555   69161 cri.go:89] found id: ""
	I0717 01:42:24.070564   69161 logs.go:276] 1 containers: [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2]
	I0717 01:42:24.070624   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.075767   69161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:42:24.075844   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:42:24.118412   69161 cri.go:89] found id: "0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:24.118434   69161 cri.go:89] found id: ""
	I0717 01:42:24.118442   69161 logs.go:276] 1 containers: [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf]
	I0717 01:42:24.118491   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.123255   69161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:42:24.123323   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:42:24.159858   69161 cri.go:89] found id: "e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:24.159880   69161 cri.go:89] found id: ""
	I0717 01:42:24.159887   69161 logs.go:276] 1 containers: [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902]
	I0717 01:42:24.159938   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.164261   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:42:24.164333   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:42:24.201402   69161 cri.go:89] found id: "b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:24.201429   69161 cri.go:89] found id: ""
	I0717 01:42:24.201438   69161 logs.go:276] 1 containers: [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc]
	I0717 01:42:24.201490   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.206056   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:42:24.206112   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:42:24.241083   69161 cri.go:89] found id: "98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:24.241109   69161 cri.go:89] found id: ""
	I0717 01:42:24.241119   69161 logs.go:276] 1 containers: [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571]
	I0717 01:42:24.241177   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.245739   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:42:24.245794   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:42:24.284369   69161 cri.go:89] found id: "7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:24.284400   69161 cri.go:89] found id: ""
	I0717 01:42:24.284410   69161 logs.go:276] 1 containers: [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e]
	I0717 01:42:24.284473   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.290128   69161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:42:24.290184   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:42:24.328815   69161 cri.go:89] found id: ""
	I0717 01:42:24.328841   69161 logs.go:276] 0 containers: []
	W0717 01:42:24.328848   69161 logs.go:278] No container was found matching "kindnet"
	I0717 01:42:24.328854   69161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:42:24.328919   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:42:24.365591   69161 cri.go:89] found id: "da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:24.365614   69161 cri.go:89] found id: "b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:24.365621   69161 cri.go:89] found id: ""
	I0717 01:42:24.365630   69161 logs.go:276] 2 containers: [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a]
	I0717 01:42:24.365690   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.370614   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.375611   69161 logs.go:123] Gathering logs for dmesg ...
	I0717 01:42:24.375641   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:42:24.392837   69161 logs.go:123] Gathering logs for etcd [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf] ...
	I0717 01:42:24.392872   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:24.443010   69161 logs.go:123] Gathering logs for container status ...
	I0717 01:42:24.443036   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:42:24.482837   69161 logs.go:123] Gathering logs for coredns [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902] ...
	I0717 01:42:24.482870   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:24.536236   69161 logs.go:123] Gathering logs for kube-scheduler [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc] ...
	I0717 01:42:24.536262   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:24.576709   69161 logs.go:123] Gathering logs for kube-proxy [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571] ...
	I0717 01:42:24.576740   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:24.625042   69161 logs.go:123] Gathering logs for kube-controller-manager [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e] ...
	I0717 01:42:24.625069   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:24.679911   69161 logs.go:123] Gathering logs for storage-provisioner [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461] ...
	I0717 01:42:24.679945   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:24.721782   69161 logs.go:123] Gathering logs for kubelet ...
	I0717 01:42:24.721809   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:42:24.775881   69161 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:42:24.775916   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:42:24.917773   69161 logs.go:123] Gathering logs for kube-apiserver [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2] ...
	I0717 01:42:24.917806   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:24.962644   69161 logs.go:123] Gathering logs for storage-provisioner [b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a] ...
	I0717 01:42:24.962673   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:25.002204   69161 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:42:25.002242   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:42:28.032243   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:42:28.049580   69161 api_server.go:72] duration metric: took 4m17.331083879s to wait for apiserver process to appear ...
	I0717 01:42:28.049612   69161 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:42:28.049656   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:42:28.049717   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:42:28.088496   69161 cri.go:89] found id: "8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:28.088519   69161 cri.go:89] found id: ""
	I0717 01:42:28.088527   69161 logs.go:276] 1 containers: [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2]
	I0717 01:42:28.088598   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.092659   69161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:42:28.092712   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:42:28.127205   69161 cri.go:89] found id: "0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:28.127224   69161 cri.go:89] found id: ""
	I0717 01:42:28.127231   69161 logs.go:276] 1 containers: [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf]
	I0717 01:42:28.127276   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.131356   69161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:42:28.131425   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:42:28.166535   69161 cri.go:89] found id: "e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:28.166556   69161 cri.go:89] found id: ""
	I0717 01:42:28.166564   69161 logs.go:276] 1 containers: [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902]
	I0717 01:42:28.166608   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.170576   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:42:28.170633   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:42:28.204842   69161 cri.go:89] found id: "b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:28.204863   69161 cri.go:89] found id: ""
	I0717 01:42:28.204871   69161 logs.go:276] 1 containers: [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc]
	I0717 01:42:28.204924   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.208869   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:42:28.208922   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:42:28.241397   69161 cri.go:89] found id: "98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:28.241414   69161 cri.go:89] found id: ""
	I0717 01:42:28.241421   69161 logs.go:276] 1 containers: [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571]
	I0717 01:42:28.241461   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.245569   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:42:28.245630   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:42:28.282072   69161 cri.go:89] found id: "7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:28.282097   69161 cri.go:89] found id: ""
	I0717 01:42:28.282106   69161 logs.go:276] 1 containers: [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e]
	I0717 01:42:28.282159   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.286678   69161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:42:28.286738   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:42:28.320229   69161 cri.go:89] found id: ""
	I0717 01:42:28.320255   69161 logs.go:276] 0 containers: []
	W0717 01:42:28.320265   69161 logs.go:278] No container was found matching "kindnet"
	I0717 01:42:28.320271   69161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:42:28.320321   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:42:28.358955   69161 cri.go:89] found id: "da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:28.358979   69161 cri.go:89] found id: "b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:28.358985   69161 cri.go:89] found id: ""
	I0717 01:42:28.358992   69161 logs.go:276] 2 containers: [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a]
	I0717 01:42:28.359051   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.363407   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.367862   69161 logs.go:123] Gathering logs for kube-scheduler [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc] ...
	I0717 01:42:28.367886   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:28.405920   69161 logs.go:123] Gathering logs for kube-proxy [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571] ...
	I0717 01:42:28.405948   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:28.442790   69161 logs.go:123] Gathering logs for kube-controller-manager [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e] ...
	I0717 01:42:28.442814   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:28.507947   69161 logs.go:123] Gathering logs for storage-provisioner [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461] ...
	I0717 01:42:28.507977   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:28.543353   69161 logs.go:123] Gathering logs for storage-provisioner [b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a] ...
	I0717 01:42:28.543375   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:28.591451   69161 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:42:28.591484   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:42:29.046193   69161 logs.go:123] Gathering logs for container status ...
	I0717 01:42:29.046234   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:42:29.093710   69161 logs.go:123] Gathering logs for etcd [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf] ...
	I0717 01:42:29.093743   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:29.132784   69161 logs.go:123] Gathering logs for dmesg ...
	I0717 01:42:29.132811   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:42:29.148146   69161 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:42:29.148176   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:42:29.250655   69161 logs.go:123] Gathering logs for kube-apiserver [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2] ...
	I0717 01:42:29.250682   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:29.295193   69161 logs.go:123] Gathering logs for coredns [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902] ...
	I0717 01:42:29.295222   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:29.330372   69161 logs.go:123] Gathering logs for kubelet ...
	I0717 01:42:29.330404   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:42:31.882296   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:42:31.887420   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0717 01:42:31.889130   69161 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 01:42:31.889151   69161 api_server.go:131] duration metric: took 3.839533176s to wait for apiserver health ...
	I0717 01:42:31.889159   69161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:42:31.889180   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:42:31.889231   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:42:31.932339   69161 cri.go:89] found id: "8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:31.932359   69161 cri.go:89] found id: ""
	I0717 01:42:31.932369   69161 logs.go:276] 1 containers: [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2]
	I0717 01:42:31.932428   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:31.936635   69161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:42:31.936694   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:42:31.973771   69161 cri.go:89] found id: "0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:31.973797   69161 cri.go:89] found id: ""
	I0717 01:42:31.973805   69161 logs.go:276] 1 containers: [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf]
	I0717 01:42:31.973864   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:31.978328   69161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:42:31.978400   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:42:32.017561   69161 cri.go:89] found id: "e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:32.017589   69161 cri.go:89] found id: ""
	I0717 01:42:32.017598   69161 logs.go:276] 1 containers: [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902]
	I0717 01:42:32.017652   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.021983   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:42:32.022043   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:42:32.060032   69161 cri.go:89] found id: "b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:32.060058   69161 cri.go:89] found id: ""
	I0717 01:42:32.060067   69161 logs.go:276] 1 containers: [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc]
	I0717 01:42:32.060124   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.064390   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:42:32.064447   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:42:32.104292   69161 cri.go:89] found id: "98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:32.104314   69161 cri.go:89] found id: ""
	I0717 01:42:32.104322   69161 logs.go:276] 1 containers: [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571]
	I0717 01:42:32.104378   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.108874   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:42:32.108939   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:42:32.151590   69161 cri.go:89] found id: "7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:32.151611   69161 cri.go:89] found id: ""
	I0717 01:42:32.151619   69161 logs.go:276] 1 containers: [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e]
	I0717 01:42:32.151683   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.155683   69161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:42:32.155749   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:42:32.191197   69161 cri.go:89] found id: ""
	I0717 01:42:32.191224   69161 logs.go:276] 0 containers: []
	W0717 01:42:32.191235   69161 logs.go:278] No container was found matching "kindnet"
	I0717 01:42:32.191250   69161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:42:32.191315   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:42:32.228709   69161 cri.go:89] found id: "da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:32.228729   69161 cri.go:89] found id: "b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:32.228734   69161 cri.go:89] found id: ""
	I0717 01:42:32.228741   69161 logs.go:276] 2 containers: [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a]
	I0717 01:42:32.228825   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.234032   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.239566   69161 logs.go:123] Gathering logs for dmesg ...
	I0717 01:42:32.239588   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:42:32.254327   69161 logs.go:123] Gathering logs for kube-apiserver [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2] ...
	I0717 01:42:32.254353   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:32.313682   69161 logs.go:123] Gathering logs for etcd [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf] ...
	I0717 01:42:32.313709   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:32.354250   69161 logs.go:123] Gathering logs for kube-controller-manager [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e] ...
	I0717 01:42:32.354278   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:32.404452   69161 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:42:32.404490   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:42:32.824059   69161 logs.go:123] Gathering logs for kubelet ...
	I0717 01:42:32.824092   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:42:32.877614   69161 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:42:32.877645   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:42:32.987728   69161 logs.go:123] Gathering logs for coredns [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902] ...
	I0717 01:42:32.987756   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:33.028146   69161 logs.go:123] Gathering logs for kube-scheduler [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc] ...
	I0717 01:42:33.028183   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:33.067880   69161 logs.go:123] Gathering logs for kube-proxy [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571] ...
	I0717 01:42:33.067907   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:33.106837   69161 logs.go:123] Gathering logs for storage-provisioner [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461] ...
	I0717 01:42:33.106870   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:33.141500   69161 logs.go:123] Gathering logs for storage-provisioner [b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a] ...
	I0717 01:42:33.141530   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:33.183960   69161 logs.go:123] Gathering logs for container status ...
	I0717 01:42:33.183991   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:42:35.738491   69161 system_pods.go:59] 8 kube-system pods found
	I0717 01:42:35.738522   69161 system_pods.go:61] "coredns-5cfdc65f69-rzhfk" [eb91980f-dca7-4dd0-902e-7d1ffac4e1b7] Running
	I0717 01:42:35.738526   69161 system_pods.go:61] "etcd-no-preload-818382" [99688a8a-50fc-416b-9c00-23a516eab775] Running
	I0717 01:42:35.738531   69161 system_pods.go:61] "kube-apiserver-no-preload-818382" [3e08eb95-84f7-4541-a2c3-9a5b9e3365f9] Running
	I0717 01:42:35.738536   69161 system_pods.go:61] "kube-controller-manager-no-preload-818382" [d356be23-8cd9-4f72-94e6-354a39f587eb] Running
	I0717 01:42:35.738551   69161 system_pods.go:61] "kube-proxy-7xjgl" [79ab1bff-5791-464d-98a0-041c53c47234] Running
	I0717 01:42:35.738558   69161 system_pods.go:61] "kube-scheduler-no-preload-818382" [e148b48b-ee09-49b4-9600-83c039254f29] Running
	I0717 01:42:35.738567   69161 system_pods.go:61] "metrics-server-78fcd8795b-vgkwg" [6386b732-76a6-4744-9215-e4764e08e4e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:42:35.738573   69161 system_pods.go:61] "storage-provisioner" [c5a0695e-6c38-463e-8f96-60c0e60c7132] Running
	I0717 01:42:35.738583   69161 system_pods.go:74] duration metric: took 3.849417383s to wait for pod list to return data ...
	I0717 01:42:35.738596   69161 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:42:35.741135   69161 default_sa.go:45] found service account: "default"
	I0717 01:42:35.741154   69161 default_sa.go:55] duration metric: took 2.55225ms for default service account to be created ...
	I0717 01:42:35.741160   69161 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:42:35.745925   69161 system_pods.go:86] 8 kube-system pods found
	I0717 01:42:35.745944   69161 system_pods.go:89] "coredns-5cfdc65f69-rzhfk" [eb91980f-dca7-4dd0-902e-7d1ffac4e1b7] Running
	I0717 01:42:35.745949   69161 system_pods.go:89] "etcd-no-preload-818382" [99688a8a-50fc-416b-9c00-23a516eab775] Running
	I0717 01:42:35.745953   69161 system_pods.go:89] "kube-apiserver-no-preload-818382" [3e08eb95-84f7-4541-a2c3-9a5b9e3365f9] Running
	I0717 01:42:35.745957   69161 system_pods.go:89] "kube-controller-manager-no-preload-818382" [d356be23-8cd9-4f72-94e6-354a39f587eb] Running
	I0717 01:42:35.745961   69161 system_pods.go:89] "kube-proxy-7xjgl" [79ab1bff-5791-464d-98a0-041c53c47234] Running
	I0717 01:42:35.745965   69161 system_pods.go:89] "kube-scheduler-no-preload-818382" [e148b48b-ee09-49b4-9600-83c039254f29] Running
	I0717 01:42:35.745971   69161 system_pods.go:89] "metrics-server-78fcd8795b-vgkwg" [6386b732-76a6-4744-9215-e4764e08e4e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:42:35.745977   69161 system_pods.go:89] "storage-provisioner" [c5a0695e-6c38-463e-8f96-60c0e60c7132] Running
	I0717 01:42:35.745986   69161 system_pods.go:126] duration metric: took 4.820763ms to wait for k8s-apps to be running ...
	I0717 01:42:35.745994   69161 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:42:35.746043   69161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:42:35.763979   69161 system_svc.go:56] duration metric: took 17.975443ms WaitForService to wait for kubelet
	I0717 01:42:35.764007   69161 kubeadm.go:582] duration metric: took 4m25.045517006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:42:35.764027   69161 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:42:35.768267   69161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:42:35.768297   69161 node_conditions.go:123] node cpu capacity is 2
	I0717 01:42:35.768312   69161 node_conditions.go:105] duration metric: took 4.280712ms to run NodePressure ...
	I0717 01:42:35.768337   69161 start.go:241] waiting for startup goroutines ...
	I0717 01:42:35.768347   69161 start.go:246] waiting for cluster config update ...
	I0717 01:42:35.768374   69161 start.go:255] writing updated cluster config ...
	I0717 01:42:35.768681   69161 ssh_runner.go:195] Run: rm -f paused
	I0717 01:42:35.817223   69161 start.go:600] kubectl: 1.30.2, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 01:42:35.819333   69161 out.go:177] * Done! kubectl is now configured to use "no-preload-818382" cluster and "default" namespace by default
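	(Editor's note: a minimal repro sketch, not part of the captured log. It re-runs by hand the same checks the log above records — the apiserver healthz probe and the crictl/journalctl log gathering. The endpoint 192.168.39.38:8443, the container-name filters, and the `--tail 400` / `-n 400` values are taken directly from the log lines above; adjust them for your own node.)
	
	    # Probe the API server health endpoint the test polls until it returns 200 "ok".
	    curl -ks https://192.168.39.38:8443/healthz
	
	    # List a control-plane container the same way the log does, via crictl.
	    sudo crictl ps -a --quiet --name=kube-apiserver
	
	    # Tail the last 400 log lines of that container (ID taken from the previous command).
	    sudo crictl logs --tail 400 "$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)"
	
	    # Gather the CRI-O and kubelet journals, mirroring the "Gathering logs" steps above.
	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400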
	
	
	==> CRI-O <==
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.804510200Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180581804486766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=645d8f93-9354-483b-b763-2b75528629d4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.805103727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5bef200-06c0-4b8a-9d0f-0f4c2ee1fe8b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.805160029Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5bef200-06c0-4b8a-9d0f-0f4c2ee1fe8b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.805450418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7fef3a9397e5e20bb4f8c41fb29412d33aac928f53f2c389c039e8eebd15e24,PodSandboxId:ba758410f000d70c91659f1d2bbb68a0e3fe63e64842109b1f69bed7491f180c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038259652069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jbsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a95f33d-19ef-4b2e-a94e-08bbcaff92dc,},Annotations:map[string]string{io.kubernetes.container.hash: f840a0a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eed5cd4d1e24c7f37fdbb08bab5d2162ad480e8411233234c5c40417775e266,PodSandboxId:cb3af9dc3f7d686064e05ff60f65b46c1107e638e950de67fb4497b09d89be84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038200001329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mqjqg,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: ca27ce06-d171-4edd-9a1d-11898283f3ac,},Annotations:map[string]string{io.kubernetes.container.hash: f57320d7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8428dd4b31f403265f72aa016c445dee182a5309efa61fabd9e5f80506ea8979,PodSandboxId:b77504896dcb898c79f9b698b78a00617d8ee411aae6c3e439f2ab34dbca5aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721180038047568193,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3352a0de-41db-4537-b87a-24137084aa7a,},Annotations:map[string]string{io.kubernetes.container.hash: f0fc49d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda36ad068bc813ef826f15bb2666b1331230f655433861613fab689e98d0840,PodSandboxId:5382d0a57c5ce3f2ccee4bbc6a2b7a4e819f8153f4a76b6ffafcaa82d659abd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721180036827139635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55xmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6913d5-3362-4a9f-a159-1f9b1da7380a,},Annotations:map[string]string{io.kubernetes.container.hash: 19059592,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3be8a32004f486e3105ab65803f8e2017d04c43501d58ff97a3928b1ae10a3,PodSandboxId:216ab51e933ccf4ccc8a6b0293eb3a238cd3be19d8fad316f5ba92e04752c843,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172118001739921388
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c34385125b125de5400fa3226cf2de,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d32ff42339e93e69d019219c502384c38b3ff263b530b2d5b3dc7b6d7082a51,PodSandboxId:93bfd1f14b71596774e7cc218037091329950961f324aab8b0be69ee68389b5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180017395566478,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a514fc142df0ab9cd96e7808cfb29643,},Annotations:map[string]string{io.kubernetes.container.hash: 84b4e281,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967ef369f3c4138aefb5f4067e098be3c2958a5b19ca193593f4b7d88586a1a7,PodSandboxId:ef3005fd43bf3b843eb81891601a3e181ba6999fd67656e39963f8cf843482cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180017360782785,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 681b4df79913385a7df4408fb39c8722,},Annotations:map[string]string{io.kubernetes.container.hash: f56a7a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5d4443945dc37f18c20fd962b8d50e36f3aef34ed3cc135225afc3959134c4,PodSandboxId:e92d1b4917088b309fb1351143fabcbaa5e6fbd652ccd2da0987ba1ee75e754c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180017304125969,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1b23caea4395fd53bf3e32d9165fe52,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5bef200-06c0-4b8a-9d0f-0f4c2ee1fe8b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.847969994Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d73a6bb-9ba8-452b-97aa-28e6a21cc369 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.848040510Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d73a6bb-9ba8-452b-97aa-28e6a21cc369 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.849635601Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06c065cb-750c-4123-aaae-5c96d620320e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.850033389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180581850008927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06c065cb-750c-4123-aaae-5c96d620320e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.850592614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1f83702-8c60-4b6f-9f99-bb8d990ceb54 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.850652013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1f83702-8c60-4b6f-9f99-bb8d990ceb54 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.850835742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7fef3a9397e5e20bb4f8c41fb29412d33aac928f53f2c389c039e8eebd15e24,PodSandboxId:ba758410f000d70c91659f1d2bbb68a0e3fe63e64842109b1f69bed7491f180c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038259652069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jbsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a95f33d-19ef-4b2e-a94e-08bbcaff92dc,},Annotations:map[string]string{io.kubernetes.container.hash: f840a0a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eed5cd4d1e24c7f37fdbb08bab5d2162ad480e8411233234c5c40417775e266,PodSandboxId:cb3af9dc3f7d686064e05ff60f65b46c1107e638e950de67fb4497b09d89be84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038200001329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mqjqg,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: ca27ce06-d171-4edd-9a1d-11898283f3ac,},Annotations:map[string]string{io.kubernetes.container.hash: f57320d7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8428dd4b31f403265f72aa016c445dee182a5309efa61fabd9e5f80506ea8979,PodSandboxId:b77504896dcb898c79f9b698b78a00617d8ee411aae6c3e439f2ab34dbca5aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721180038047568193,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3352a0de-41db-4537-b87a-24137084aa7a,},Annotations:map[string]string{io.kubernetes.container.hash: f0fc49d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda36ad068bc813ef826f15bb2666b1331230f655433861613fab689e98d0840,PodSandboxId:5382d0a57c5ce3f2ccee4bbc6a2b7a4e819f8153f4a76b6ffafcaa82d659abd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721180036827139635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55xmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6913d5-3362-4a9f-a159-1f9b1da7380a,},Annotations:map[string]string{io.kubernetes.container.hash: 19059592,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3be8a32004f486e3105ab65803f8e2017d04c43501d58ff97a3928b1ae10a3,PodSandboxId:216ab51e933ccf4ccc8a6b0293eb3a238cd3be19d8fad316f5ba92e04752c843,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172118001739921388
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c34385125b125de5400fa3226cf2de,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d32ff42339e93e69d019219c502384c38b3ff263b530b2d5b3dc7b6d7082a51,PodSandboxId:93bfd1f14b71596774e7cc218037091329950961f324aab8b0be69ee68389b5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180017395566478,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a514fc142df0ab9cd96e7808cfb29643,},Annotations:map[string]string{io.kubernetes.container.hash: 84b4e281,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967ef369f3c4138aefb5f4067e098be3c2958a5b19ca193593f4b7d88586a1a7,PodSandboxId:ef3005fd43bf3b843eb81891601a3e181ba6999fd67656e39963f8cf843482cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180017360782785,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 681b4df79913385a7df4408fb39c8722,},Annotations:map[string]string{io.kubernetes.container.hash: f56a7a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5d4443945dc37f18c20fd962b8d50e36f3aef34ed3cc135225afc3959134c4,PodSandboxId:e92d1b4917088b309fb1351143fabcbaa5e6fbd652ccd2da0987ba1ee75e754c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180017304125969,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1b23caea4395fd53bf3e32d9165fe52,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1f83702-8c60-4b6f-9f99-bb8d990ceb54 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.888459208Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03809eff-e90c-4e95-b3af-2da27c5c6260 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.888527235Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03809eff-e90c-4e95-b3af-2da27c5c6260 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.889543311Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79bc029c-47e2-4513-b5ed-30e144d58ee9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.889962824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180581889939118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79bc029c-47e2-4513-b5ed-30e144d58ee9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.890566828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edaa4e72-cf01-4253-b22b-9091c39facda name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.890619554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edaa4e72-cf01-4253-b22b-9091c39facda name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.890802157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7fef3a9397e5e20bb4f8c41fb29412d33aac928f53f2c389c039e8eebd15e24,PodSandboxId:ba758410f000d70c91659f1d2bbb68a0e3fe63e64842109b1f69bed7491f180c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038259652069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jbsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a95f33d-19ef-4b2e-a94e-08bbcaff92dc,},Annotations:map[string]string{io.kubernetes.container.hash: f840a0a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eed5cd4d1e24c7f37fdbb08bab5d2162ad480e8411233234c5c40417775e266,PodSandboxId:cb3af9dc3f7d686064e05ff60f65b46c1107e638e950de67fb4497b09d89be84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038200001329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mqjqg,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: ca27ce06-d171-4edd-9a1d-11898283f3ac,},Annotations:map[string]string{io.kubernetes.container.hash: f57320d7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8428dd4b31f403265f72aa016c445dee182a5309efa61fabd9e5f80506ea8979,PodSandboxId:b77504896dcb898c79f9b698b78a00617d8ee411aae6c3e439f2ab34dbca5aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721180038047568193,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3352a0de-41db-4537-b87a-24137084aa7a,},Annotations:map[string]string{io.kubernetes.container.hash: f0fc49d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda36ad068bc813ef826f15bb2666b1331230f655433861613fab689e98d0840,PodSandboxId:5382d0a57c5ce3f2ccee4bbc6a2b7a4e819f8153f4a76b6ffafcaa82d659abd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721180036827139635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55xmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6913d5-3362-4a9f-a159-1f9b1da7380a,},Annotations:map[string]string{io.kubernetes.container.hash: 19059592,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3be8a32004f486e3105ab65803f8e2017d04c43501d58ff97a3928b1ae10a3,PodSandboxId:216ab51e933ccf4ccc8a6b0293eb3a238cd3be19d8fad316f5ba92e04752c843,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172118001739921388
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c34385125b125de5400fa3226cf2de,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d32ff42339e93e69d019219c502384c38b3ff263b530b2d5b3dc7b6d7082a51,PodSandboxId:93bfd1f14b71596774e7cc218037091329950961f324aab8b0be69ee68389b5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180017395566478,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a514fc142df0ab9cd96e7808cfb29643,},Annotations:map[string]string{io.kubernetes.container.hash: 84b4e281,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967ef369f3c4138aefb5f4067e098be3c2958a5b19ca193593f4b7d88586a1a7,PodSandboxId:ef3005fd43bf3b843eb81891601a3e181ba6999fd67656e39963f8cf843482cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180017360782785,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 681b4df79913385a7df4408fb39c8722,},Annotations:map[string]string{io.kubernetes.container.hash: f56a7a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5d4443945dc37f18c20fd962b8d50e36f3aef34ed3cc135225afc3959134c4,PodSandboxId:e92d1b4917088b309fb1351143fabcbaa5e6fbd652ccd2da0987ba1ee75e754c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180017304125969,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1b23caea4395fd53bf3e32d9165fe52,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edaa4e72-cf01-4253-b22b-9091c39facda name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.925367004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b3bcb72-d640-4b8a-ab27-4bcea5ca8f6c name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.925440197Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b3bcb72-d640-4b8a-ab27-4bcea5ca8f6c name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.926678329Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79681086-40f5-46b3-a21b-c1dc466349da name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.927072794Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180581927051031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79681086-40f5-46b3-a21b-c1dc466349da name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.927590748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40e5f97b-0f7f-4adb-ad33-da56aae6f0e1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.927756471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40e5f97b-0f7f-4adb-ad33-da56aae6f0e1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:01 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:43:01.927938709Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7fef3a9397e5e20bb4f8c41fb29412d33aac928f53f2c389c039e8eebd15e24,PodSandboxId:ba758410f000d70c91659f1d2bbb68a0e3fe63e64842109b1f69bed7491f180c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038259652069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jbsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a95f33d-19ef-4b2e-a94e-08bbcaff92dc,},Annotations:map[string]string{io.kubernetes.container.hash: f840a0a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eed5cd4d1e24c7f37fdbb08bab5d2162ad480e8411233234c5c40417775e266,PodSandboxId:cb3af9dc3f7d686064e05ff60f65b46c1107e638e950de67fb4497b09d89be84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038200001329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mqjqg,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: ca27ce06-d171-4edd-9a1d-11898283f3ac,},Annotations:map[string]string{io.kubernetes.container.hash: f57320d7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8428dd4b31f403265f72aa016c445dee182a5309efa61fabd9e5f80506ea8979,PodSandboxId:b77504896dcb898c79f9b698b78a00617d8ee411aae6c3e439f2ab34dbca5aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721180038047568193,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3352a0de-41db-4537-b87a-24137084aa7a,},Annotations:map[string]string{io.kubernetes.container.hash: f0fc49d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda36ad068bc813ef826f15bb2666b1331230f655433861613fab689e98d0840,PodSandboxId:5382d0a57c5ce3f2ccee4bbc6a2b7a4e819f8153f4a76b6ffafcaa82d659abd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721180036827139635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55xmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6913d5-3362-4a9f-a159-1f9b1da7380a,},Annotations:map[string]string{io.kubernetes.container.hash: 19059592,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3be8a32004f486e3105ab65803f8e2017d04c43501d58ff97a3928b1ae10a3,PodSandboxId:216ab51e933ccf4ccc8a6b0293eb3a238cd3be19d8fad316f5ba92e04752c843,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172118001739921388
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c34385125b125de5400fa3226cf2de,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d32ff42339e93e69d019219c502384c38b3ff263b530b2d5b3dc7b6d7082a51,PodSandboxId:93bfd1f14b71596774e7cc218037091329950961f324aab8b0be69ee68389b5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180017395566478,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a514fc142df0ab9cd96e7808cfb29643,},Annotations:map[string]string{io.kubernetes.container.hash: 84b4e281,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967ef369f3c4138aefb5f4067e098be3c2958a5b19ca193593f4b7d88586a1a7,PodSandboxId:ef3005fd43bf3b843eb81891601a3e181ba6999fd67656e39963f8cf843482cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180017360782785,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 681b4df79913385a7df4408fb39c8722,},Annotations:map[string]string{io.kubernetes.container.hash: f56a7a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5d4443945dc37f18c20fd962b8d50e36f3aef34ed3cc135225afc3959134c4,PodSandboxId:e92d1b4917088b309fb1351143fabcbaa5e6fbd652ccd2da0987ba1ee75e754c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180017304125969,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1b23caea4395fd53bf3e32d9165fe52,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40e5f97b-0f7f-4adb-ad33-da56aae6f0e1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7fef3a9397e5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   ba758410f000d       coredns-7db6d8ff4d-jbsq5
	5eed5cd4d1e24       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   cb3af9dc3f7d6       coredns-7db6d8ff4d-mqjqg
	8428dd4b31f40       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   b77504896dcb8       storage-provisioner
	bda36ad068bc8       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   9 minutes ago       Running             kube-proxy                0                   5382d0a57c5ce       kube-proxy-55xmv
	bd3be8a32004f       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   9 minutes ago       Running             kube-scheduler            2                   216ab51e933cc       kube-scheduler-default-k8s-diff-port-945694
	3d32ff42339e9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   93bfd1f14b715       etcd-default-k8s-diff-port-945694
	967ef369f3c41       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   9 minutes ago       Running             kube-apiserver            2                   ef3005fd43bf3       kube-apiserver-default-k8s-diff-port-945694
	fb5d4443945dc       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   9 minutes ago       Running             kube-controller-manager   2                   e92d1b4917088       kube-controller-manager-default-k8s-diff-port-945694
	
	
	==> coredns [5eed5cd4d1e24c7f37fdbb08bab5d2162ad480e8411233234c5c40417775e266] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f7fef3a9397e5e20bb4f8c41fb29412d33aac928f53f2c389c039e8eebd15e24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-945694
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-945694
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=default-k8s-diff-port-945694
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_33_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:33:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-945694
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:42:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:39:09 +0000   Wed, 17 Jul 2024 01:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:39:09 +0000   Wed, 17 Jul 2024 01:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:39:09 +0000   Wed, 17 Jul 2024 01:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:39:09 +0000   Wed, 17 Jul 2024 01:33:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.30
	  Hostname:    default-k8s-diff-port-945694
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4fc2ef93f4e4d689fe3de0aecd1906b
	  System UUID:                d4fc2ef9-3f4e-4d68-9fe3-de0aecd1906b
	  Boot ID:                    704973c4-4314-43a4-b18d-29cc02696ddd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jbsq5                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-7db6d8ff4d-mqjqg                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-default-k8s-diff-port-945694                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-945694             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-945694    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-55xmv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-default-k8s-diff-port-945694             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-4nffv                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s (x2 over 9m20s)  kubelet          Node default-k8s-diff-port-945694 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s (x2 over 9m20s)  kubelet          Node default-k8s-diff-port-945694 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s (x2 over 9m20s)  kubelet          Node default-k8s-diff-port-945694 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m7s                   node-controller  Node default-k8s-diff-port-945694 event: Registered Node default-k8s-diff-port-945694 in Controller
	
	
	==> dmesg <==
	[  +0.051861] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041147] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.524530] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.322871] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.579063] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.029292] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.062216] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072548] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.192016] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.136973] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.310457] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[  +4.725715] systemd-fstab-generator[797]: Ignoring "noauto" option for root device
	[  +0.063298] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.948258] systemd-fstab-generator[922]: Ignoring "noauto" option for root device
	[  +5.569174] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.196935] kauditd_printk_skb: 84 callbacks suppressed
	[Jul17 01:33] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.491839] systemd-fstab-generator[3607]: Ignoring "noauto" option for root device
	[  +4.969549] kauditd_printk_skb: 55 callbacks suppressed
	[  +1.583695] systemd-fstab-generator[3932]: Ignoring "noauto" option for root device
	[ +14.378969] systemd-fstab-generator[4156]: Ignoring "noauto" option for root device
	[  +0.015283] kauditd_printk_skb: 14 callbacks suppressed
	[Jul17 01:35] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [3d32ff42339e93e69d019219c502384c38b3ff263b530b2d5b3dc7b6d7082a51] <==
	{"level":"info","ts":"2024-07-17T01:33:37.802708Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:33:37.802913Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"21545a69824e3d79","initial-advertise-peer-urls":["https://192.168.50.30:2380"],"listen-peer-urls":["https://192.168.50.30:2380"],"advertise-client-urls":["https://192.168.50.30:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.30:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:33:37.802952Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:33:37.803052Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.30:2380"}
	{"level":"info","ts":"2024-07-17T01:33:37.803081Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.30:2380"}
	{"level":"info","ts":"2024-07-17T01:33:38.312633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21545a69824e3d79 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T01:33:38.312687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21545a69824e3d79 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T01:33:38.312721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21545a69824e3d79 received MsgPreVoteResp from 21545a69824e3d79 at term 1"}
	{"level":"info","ts":"2024-07-17T01:33:38.312734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21545a69824e3d79 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:33:38.312739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21545a69824e3d79 received MsgVoteResp from 21545a69824e3d79 at term 2"}
	{"level":"info","ts":"2024-07-17T01:33:38.312754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21545a69824e3d79 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T01:33:38.312765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 21545a69824e3d79 elected leader 21545a69824e3d79 at term 2"}
	{"level":"info","ts":"2024-07-17T01:33:38.317338Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:33:38.319525Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"21545a69824e3d79","local-member-attributes":"{Name:default-k8s-diff-port-945694 ClientURLs:[https://192.168.50.30:2379]}","request-path":"/0/members/21545a69824e3d79/attributes","cluster-id":"4c46e38203538bcd","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:33:38.319669Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:33:38.320028Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4c46e38203538bcd","local-member-id":"21545a69824e3d79","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:33:38.320112Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:33:38.32015Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:33:38.320226Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:33:38.320255Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:33:38.320263Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:33:38.325794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.30:2379"}
	{"level":"info","ts":"2024-07-17T01:33:38.330253Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	2024/07/17 01:33:42 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-17T01:37:59.753586Z","caller":"traceutil/trace.go:171","msg":"trace[1848584702] transaction","detail":"{read_only:false; response_revision:651; number_of_response:1; }","duration":"133.751838ms","start":"2024-07-17T01:37:59.619787Z","end":"2024-07-17T01:37:59.753539Z","steps":["trace[1848584702] 'process raft request'  (duration: 133.437916ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:43:02 up 14 min,  0 users,  load average: 0.13, 0.15, 0.10
	Linux default-k8s-diff-port-945694 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [967ef369f3c4138aefb5f4067e098be3c2958a5b19ca193593f4b7d88586a1a7] <==
	I0717 01:36:58.689486       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:38:40.053882       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:38:40.054108       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 01:38:41.054441       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:38:41.054563       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 01:38:41.054597       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:38:41.054454       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:38:41.054699       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 01:38:41.055890       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:39:41.055661       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:39:41.055923       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 01:39:41.055961       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:39:41.056135       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:39:41.056227       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 01:39:41.057998       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:41:41.057053       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:41:41.057384       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 01:41:41.057425       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:41:41.058343       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:41:41.058424       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 01:41:41.058447       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [fb5d4443945dc37f18c20fd962b8d50e36f3aef34ed3cc135225afc3959134c4] <==
	I0717 01:37:26.011916       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:37:55.562397       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:37:56.027866       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:38:25.568537       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:38:26.038032       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:38:55.573939       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:38:56.045638       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:39:25.580981       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:39:26.054948       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 01:39:44.629628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="326.025µs"
	E0717 01:39:55.586994       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:39:56.070404       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 01:39:56.625675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="119.286µs"
	E0717 01:40:25.594952       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:40:26.079019       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:40:55.600358       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:40:56.086838       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:41:25.605405       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:41:26.098085       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:41:55.611250       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:41:56.107031       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:42:25.617152       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:42:26.114988       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:42:55.623026       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:42:56.122686       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [bda36ad068bc813ef826f15bb2666b1331230f655433861613fab689e98d0840] <==
	I0717 01:33:57.036034       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:33:57.053473       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.30"]
	I0717 01:33:57.134694       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:33:57.134752       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:33:57.134769       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:33:57.137308       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:33:57.137483       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:33:57.137494       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:33:57.138820       1 config.go:192] "Starting service config controller"
	I0717 01:33:57.138847       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:33:57.138878       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:33:57.138882       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:33:57.139641       1 config.go:319] "Starting node config controller"
	I0717 01:33:57.139649       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:33:57.238954       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:33:57.239074       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:33:57.240680       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bd3be8a32004f486e3105ab65803f8e2017d04c43501d58ff97a3928b1ae10a3] <==
	W0717 01:33:40.081694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 01:33:40.081940       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 01:33:40.081781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 01:33:40.082007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 01:33:40.081835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 01:33:40.082066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 01:33:40.081845       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 01:33:40.082126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 01:33:40.902644       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 01:33:40.902673       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:33:41.019221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 01:33:41.019310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 01:33:41.059088       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 01:33:41.059211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 01:33:41.109485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 01:33:41.109684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 01:33:41.116801       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 01:33:41.116911       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 01:33:41.148008       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 01:33:41.148096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 01:33:41.182541       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 01:33:41.182597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 01:33:41.244663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 01:33:41.244747       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0717 01:33:43.874907       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:40:42 default-k8s-diff-port-945694 kubelet[3939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:40:42 default-k8s-diff-port-945694 kubelet[3939]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:40:42 default-k8s-diff-port-945694 kubelet[3939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:40:42 default-k8s-diff-port-945694 kubelet[3939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:40:53 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:40:53.606909    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:41:04 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:41:04.607529    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:41:16 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:41:16.607326    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:41:29 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:41:29.606849    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:41:40 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:41:40.608410    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:41:42 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:41:42.642383    3939 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:41:42 default-k8s-diff-port-945694 kubelet[3939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:41:42 default-k8s-diff-port-945694 kubelet[3939]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:41:42 default-k8s-diff-port-945694 kubelet[3939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:41:42 default-k8s-diff-port-945694 kubelet[3939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:41:54 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:41:54.606940    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:42:06 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:42:06.606748    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:42:21 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:42:21.607105    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:42:34 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:42:34.606937    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:42:42 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:42:42.643884    3939 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:42:42 default-k8s-diff-port-945694 kubelet[3939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:42:42 default-k8s-diff-port-945694 kubelet[3939]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:42:42 default-k8s-diff-port-945694 kubelet[3939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:42:42 default-k8s-diff-port-945694 kubelet[3939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:42:47 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:42:47.606510    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:43:00 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:43:00.606277    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	
	
	==> storage-provisioner [8428dd4b31f403265f72aa016c445dee182a5309efa61fabd9e5f80506ea8979] <==
	I0717 01:33:58.290237       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:33:58.306338       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:33:58.306374       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:33:58.323096       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:33:58.323942       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-945694_5ebc6471-a584-4320-90d4-35b93d89aaed!
	I0717 01:33:58.349702       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0e52588b-4b2b-4822-901e-6e471a9db2a8", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-945694_5ebc6471-a584-4320-90d4-35b93d89aaed became leader
	I0717 01:33:58.428326       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-945694_5ebc6471-a584-4320-90d4-35b93d89aaed!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-945694 -n default-k8s-diff-port-945694
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-945694 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-4nffv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-945694 describe pod metrics-server-569cc877fc-4nffv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-945694 describe pod metrics-server-569cc877fc-4nffv: exit status 1 (62.290346ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-4nffv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-945694 describe pod metrics-server-569cc877fc-4nffv: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.29s)
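For context: UserAppExistsAfterStop boils down to polling the restarted cluster for the test workload's pods until one reports Running or the 9m0s budget expires, tolerating apiserver connection errors while the node comes back. The Go sketch below illustrates that wait-loop pattern with client-go; it is not minikube's actual helpers_test.go code, and the kubeconfig path and the "integration-test=busybox" label selector are assumptions made for the example.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config); context selection
	// is left to the loader, so point it at the cluster under test before running.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(9 * time.Minute) // same 9m0s budget the report shows
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "integration-test=busybox"}) // assumed label
		if err != nil {
			// The apiserver is often unreachable right after a stop/start,
			// so log a warning and retry instead of failing immediately.
			fmt.Println("WARNING:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				fmt.Printf("pod %s is Running\n", p.Name)
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for test pods")
}

Under this pattern, transient "connection refused" warnings are logged and retried rather than treated as failures; a FAIL whose duration sits near the full wait budget, as above, typically means the loop never observed a Running pod before the deadline.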

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (312.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
	[last message repeated 57 more times]
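The warnings above come from the 9m0s poll announced at start_stop_delete_test.go:287: each attempt to list pods labelled k8s-app=kubernetes-dashboard fails because nothing is accepting connections on 192.168.61.13:8443 yet. A rough client-go sketch of such a poll loop (an illustration under assumptions, not the minikube helper; the namespace, label selector, and timeout are taken from the log, while the kubeconfig path and 3-second retry interval are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls the API server for pods matching the given label
// selector, logging transient errors (such as connection refused) as
// warnings instead of failing immediately.
func waitForPods(kubeconfig, namespace, selector string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods(namespace).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// The condition seen in the log: the API server is not
			// reachable yet, so warn and retry.
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", namespace, selector, err)
		} else if len(pods.Items) > 0 {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out after %s waiting for %q in %q", timeout, selector, namespace)
}

func main() {
	// clientcmd.RecommendedHomeFile is the usual ~/.kube/config default and
	// only an assumption here; the CI run uses its own kubeconfig.
	err := waitForPods(clientcmd.RecommendedHomeFile,
		"kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}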
E0717 01:39:18.739301   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
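This cert_rotation error (and the similar one for addons-860537 further down) is unrelated to the dashboard wait: client-go's certificate reload is pointing at a client.crt under the functional-598951 profile that, per the error, no longer exists on disk, presumably because that profile was torn down earlier in the run. A hypothetical way to spot such stale references in a kubeconfig (sketch only, not what cert_rotation.go does; the kubeconfig path is an assumption):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed default path; the CI run keeps its kubeconfig elsewhere.
	kubeconfig := clientcmd.RecommendedHomeFile
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	for name, auth := range cfg.AuthInfos {
		for _, p := range []string{auth.ClientCertificate, auth.ClientKey} {
			if p == "" {
				continue
			}
			if _, err := os.Stat(p); err != nil {
				// Mirrors the "no such file or directory" error
				// reported by cert_rotation.go above.
				fmt.Printf("user %q references missing file %s: %v\n", name, p, err)
			}
		}
	}
}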
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
	[last message repeated 56 more times]
E0717 01:40:15.503082   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
[the warning above repeated 74 more times: every poll of the kubernetes-dashboard pod list at 192.168.61.13:8443 was refused]
E0717 01:42:12.451679   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.13:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.13:8443: connect: connection refused
[the warning above repeated 78 more times: the apiserver at 192.168.61.13:8443 continued to refuse connections for the remainder of the 9m0s wait]
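The refused connections above can be cross-checked manually against the apiserver endpoint taken from the log. This is only a sketch, not a step the test performs: it assumes curl is available on the host, and -k is used because the cluster CA is not loaded here (the path is irrelevant while the TCP connection itself is being refused):

	curl -k https://192.168.61.13:8443/healthz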
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249342 -n old-k8s-version-249342
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249342 -n old-k8s-version-249342: exit status 2 (227.845968ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-249342" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
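For reference, the selector the wait loop polls can be queried directly once the apiserver is reachable again. A hedged example, not a command the test itself issues; the context name, namespace, and label are taken from the lines above:

	kubectl --context old-k8s-version-249342 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard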
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-249342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-249342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.123µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-249342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
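The image assertion at start_stop_delete_test.go:297 can also be reproduced by hand when the cluster is up. This is a sketch: the deployment name, namespace, and expected image come from the lines above, and the jsonpath expression is just one way to surface the container images:

	kubectl --context old-k8s-version-249342 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

The check expects the output to contain registry.k8s.io/echoserver:1.4; here it could not be evaluated because the describe call above timed out.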
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342: exit status 2 (228.551956ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-249342 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-261470                              | running-upgrade-261470       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-621535                              | stopped-upgrade-621535       | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:19 UTC |
	| start   | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:19 UTC | 17 Jul 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-729236                           | kubernetes-upgrade-729236    | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	| start   | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-249342                              | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-249342             | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-249342                              | old-k8s-version-249342       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-261470                              | running-upgrade-261470       | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:20 UTC |
	| start   | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:20 UTC | 17 Jul 24 01:22 UTC |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-484167            | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:21 UTC | 17 Jul 24 01:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-945694  | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC | 17 Jul 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC |                     |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-484167                 | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-484167                                  | embed-certs-484167           | jenkins | v1.33.1 | 17 Jul 24 01:23 UTC | 17 Jul 24 01:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC | 17 Jul 24 01:28 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-945694       | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-945694 | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC | 17 Jul 24 01:34 UTC |
	|         | default-k8s-diff-port-945694                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-838524                              | cert-expiration-838524       | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC | 17 Jul 24 01:28 UTC |
	| start   | -p no-preload-818382 --memory=2200                     | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC | 17 Jul 24 01:30 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-818382             | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:30 UTC | 17 Jul 24 01:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-818382                                   | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-818382                  | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-818382 --memory=2200                     | no-preload-818382            | jenkins | v1.33.1 | 17 Jul 24 01:32 UTC | 17 Jul 24 01:42 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:32:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:32:43.547613   69161 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:32:43.547856   69161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:32:43.547865   69161 out.go:304] Setting ErrFile to fd 2...
	I0717 01:32:43.547869   69161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:32:43.548058   69161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:32:43.548591   69161 out.go:298] Setting JSON to false
	I0717 01:32:43.549476   69161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8113,"bootTime":1721171851,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:32:43.549531   69161 start.go:139] virtualization: kvm guest
	I0717 01:32:43.551667   69161 out.go:177] * [no-preload-818382] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:32:43.552978   69161 notify.go:220] Checking for updates...
	I0717 01:32:43.553027   69161 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:32:43.554498   69161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:32:43.555767   69161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:32:43.557080   69161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:32:43.558402   69161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:32:43.559566   69161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:32:43.561137   69161 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:32:43.561542   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:32:43.561591   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:43.576810   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0717 01:32:43.577217   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:43.577724   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:32:43.577746   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:43.578068   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:43.578246   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.578474   69161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:32:43.578722   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:32:43.578751   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:43.593634   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0717 01:32:43.594007   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:43.594435   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:32:43.594460   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:43.594810   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:43.594984   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.632126   69161 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:32:43.633290   69161 start.go:297] selected driver: kvm2
	I0717 01:32:43.633305   69161 start.go:901] validating driver "kvm2" against &{Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:32:43.633393   69161 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:32:43.634018   69161 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.634085   69161 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:32:43.648838   69161 install.go:137] /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:32:43.649342   69161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:32:43.649377   69161 cni.go:84] Creating CNI manager for ""
	I0717 01:32:43.649388   69161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:32:43.649454   69161 start.go:340] cluster config:
	{Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:32:43.649575   69161 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.651213   69161 out.go:177] * Starting "no-preload-818382" primary control-plane node in "no-preload-818382" cluster
	I0717 01:32:43.652698   69161 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:32:43.652866   69161 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/config.json ...
	I0717 01:32:43.652971   69161 cache.go:107] acquiring lock: {Name:mk0dda4d4cdd92722b746ab931e6544cfc8daee5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.652980   69161 cache.go:107] acquiring lock: {Name:mk1de3a52aa61e3b4e847379240ac3935bedb199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653004   69161 cache.go:107] acquiring lock: {Name:mkf6e5b69e84ed3f384772a188b9364b7e3d5b5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653072   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 01:32:43.653091   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0717 01:32:43.653102   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0717 01:32:43.653107   69161 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 146.502µs
	I0717 01:32:43.653119   69161 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653117   69161 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 121.37µs
	I0717 01:32:43.653137   69161 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653098   69161 cache.go:107] acquiring lock: {Name:mkf2f11535addf893c2faa84c376231e8d922e64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653127   69161 cache.go:107] acquiring lock: {Name:mk0f717937d10c133c40dfa3d731090d6e186c8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653157   69161 cache.go:107] acquiring lock: {Name:mkddaaee919763be73bfba0c581555b8cc97a67b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653143   69161 cache.go:107] acquiring lock: {Name:mkecaf352dd381368806d2a149fd31f0c349a680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653184   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 exists
	I0717 01:32:43.653170   69161 start.go:360] acquireMachinesLock for no-preload-818382: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:32:43.653201   69161 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0" took 76.404µs
	I0717 01:32:43.653211   69161 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0717 01:32:43.653256   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0717 01:32:43.653259   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0717 01:32:43.653270   69161 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 131.092µs
	I0717 01:32:43.653278   69161 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653278   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0717 01:32:43.653273   69161 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 220.448µs
	I0717 01:32:43.653293   69161 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0717 01:32:43.653292   69161 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 138.342µs
	I0717 01:32:43.653303   69161 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0717 01:32:43.653142   69161 cache.go:107] acquiring lock: {Name:mk2ca5e82f37242a4f02d1776db6559bdb43421e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:43.653316   69161 start.go:364] duration metric: took 84.706µs to acquireMachinesLock for "no-preload-818382"
	I0717 01:32:43.653101   69161 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 132.422µs
	I0717 01:32:43.653358   69161 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:32:43.653360   69161 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 01:32:43.653365   69161 fix.go:54] fixHost starting: 
	I0717 01:32:43.653345   69161 cache.go:115] /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0717 01:32:43.653380   69161 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 247.182µs
	I0717 01:32:43.653397   69161 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0717 01:32:43.653413   69161 cache.go:87] Successfully saved all images to host disk.
	I0717 01:32:43.653791   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:32:43.653851   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:43.669140   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0717 01:32:43.669544   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:43.669975   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:32:43.669995   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:43.670285   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:43.670451   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.670597   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:32:43.672083   69161 fix.go:112] recreateIfNeeded on no-preload-818382: state=Running err=<nil>
	W0717 01:32:43.672118   69161 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:32:43.674037   69161 out.go:177] * Updating the running kvm2 "no-preload-818382" VM ...
	I0717 01:32:40.312635   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:42.810125   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:44.006444   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:46.006933   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:43.675220   69161 machine.go:94] provisionDockerMachine start ...
	I0717 01:32:43.675236   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:32:43.675410   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:32:43.677780   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:32:43.678159   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:29:11 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:32:43.678194   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:32:43.678285   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:32:43.678480   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:32:43.678635   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:32:43.678751   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:32:43.678900   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:32:43.679072   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:32:43.679082   69161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:32:46.576890   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:44.811604   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:47.310107   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:49.310610   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:48.007526   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:50.506280   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:49.648813   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:51.310765   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:53.810052   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:53.007282   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:55.506679   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:57.506743   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:55.728954   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:55.810343   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:57.810539   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:00.007367   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:02.509717   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:32:58.800813   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:32:59.810958   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:02.310473   66659 pod_ready.go:102] pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:02.804718   66659 pod_ready.go:81] duration metric: took 4m0.000441849s for pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:02.804758   66659 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wmss9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 01:33:02.804776   66659 pod_ready.go:38] duration metric: took 4m11.542416864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:02.804800   66659 kubeadm.go:597] duration metric: took 4m19.055059195s to restartPrimaryControlPlane
	W0717 01:33:02.804851   66659 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 01:33:02.804875   66659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 01:33:05.008344   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:07.008631   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:04.880862   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:07.956811   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:09.506709   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:12.007454   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:14.007849   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:16.506348   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:17.072888   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:19.005817   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:21.006641   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:20.144862   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:23.007827   66178 pod_ready.go:102] pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace has status "Ready":"False"
	I0717 01:33:24.506621   66178 pod_ready.go:81] duration metric: took 4m0.006337956s for pod "metrics-server-569cc877fc-2qwf6" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:24.506648   66178 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 01:33:24.506656   66178 pod_ready.go:38] duration metric: took 4m4.541684979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:24.506672   66178 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:33:24.506700   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:33:24.506752   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:33:24.553972   66178 cri.go:89] found id: "d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:24.553994   66178 cri.go:89] found id: ""
	I0717 01:33:24.554003   66178 logs.go:276] 1 containers: [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026]
	I0717 01:33:24.554067   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.558329   66178 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:33:24.558382   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:33:24.593681   66178 cri.go:89] found id: "980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:24.593710   66178 cri.go:89] found id: ""
	I0717 01:33:24.593717   66178 logs.go:276] 1 containers: [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c]
	I0717 01:33:24.593764   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.598462   66178 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:33:24.598521   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:33:24.638597   66178 cri.go:89] found id: "370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:24.638617   66178 cri.go:89] found id: ""
	I0717 01:33:24.638624   66178 logs.go:276] 1 containers: [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187]
	I0717 01:33:24.638674   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.642611   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:33:24.642674   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:33:24.678207   66178 cri.go:89] found id: "98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:24.678227   66178 cri.go:89] found id: ""
	I0717 01:33:24.678233   66178 logs.go:276] 1 containers: [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802]
	I0717 01:33:24.678284   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.682820   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:33:24.682884   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:33:24.724141   66178 cri.go:89] found id: "2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:24.724170   66178 cri.go:89] found id: ""
	I0717 01:33:24.724179   66178 logs.go:276] 1 containers: [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364]
	I0717 01:33:24.724231   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.729301   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:33:24.729355   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:33:24.765894   66178 cri.go:89] found id: "b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:24.765916   66178 cri.go:89] found id: ""
	I0717 01:33:24.765925   66178 logs.go:276] 1 containers: [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c]
	I0717 01:33:24.765970   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.770898   66178 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:33:24.770951   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:33:24.805812   66178 cri.go:89] found id: ""
	I0717 01:33:24.805835   66178 logs.go:276] 0 containers: []
	W0717 01:33:24.805842   66178 logs.go:278] No container was found matching "kindnet"
	I0717 01:33:24.805848   66178 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:33:24.805897   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:33:24.847766   66178 cri.go:89] found id: "a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:24.847788   66178 cri.go:89] found id: "dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:24.847794   66178 cri.go:89] found id: ""
	I0717 01:33:24.847802   66178 logs.go:276] 2 containers: [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272]
	I0717 01:33:24.847852   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.852045   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:24.856136   66178 logs.go:123] Gathering logs for kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] ...
	I0717 01:33:24.856161   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:24.892801   66178 logs.go:123] Gathering logs for kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] ...
	I0717 01:33:24.892829   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:24.944203   66178 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:33:24.944236   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:33:25.482400   66178 logs.go:123] Gathering logs for kubelet ...
	I0717 01:33:25.482440   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:33:25.544150   66178 logs.go:123] Gathering logs for dmesg ...
	I0717 01:33:25.544190   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:33:25.559587   66178 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:33:25.559620   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:33:25.679463   66178 logs.go:123] Gathering logs for kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] ...
	I0717 01:33:25.679488   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:25.725117   66178 logs.go:123] Gathering logs for coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] ...
	I0717 01:33:25.725144   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:25.771390   66178 logs.go:123] Gathering logs for container status ...
	I0717 01:33:25.771417   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:33:25.818766   66178 logs.go:123] Gathering logs for etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] ...
	I0717 01:33:25.818792   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:25.861973   66178 logs.go:123] Gathering logs for kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] ...
	I0717 01:33:25.862008   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:25.899694   66178 logs.go:123] Gathering logs for storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] ...
	I0717 01:33:25.899723   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:25.937573   66178 logs.go:123] Gathering logs for storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] ...
	I0717 01:33:25.937604   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:26.224800   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:28.476050   66178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:33:28.491506   66178 api_server.go:72] duration metric: took 4m14.298590069s to wait for apiserver process to appear ...
	I0717 01:33:28.491527   66178 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:33:28.491568   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:33:28.491626   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:33:28.526854   66178 cri.go:89] found id: "d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:28.526882   66178 cri.go:89] found id: ""
	I0717 01:33:28.526891   66178 logs.go:276] 1 containers: [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026]
	I0717 01:33:28.526957   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.531219   66178 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:33:28.531282   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:33:28.567901   66178 cri.go:89] found id: "980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:28.567927   66178 cri.go:89] found id: ""
	I0717 01:33:28.567937   66178 logs.go:276] 1 containers: [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c]
	I0717 01:33:28.567995   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.572030   66178 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:33:28.572094   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:33:28.606586   66178 cri.go:89] found id: "370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:28.606610   66178 cri.go:89] found id: ""
	I0717 01:33:28.606622   66178 logs.go:276] 1 containers: [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187]
	I0717 01:33:28.606679   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.611494   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:33:28.611555   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:33:28.647224   66178 cri.go:89] found id: "98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:28.647247   66178 cri.go:89] found id: ""
	I0717 01:33:28.647255   66178 logs.go:276] 1 containers: [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802]
	I0717 01:33:28.647311   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.651314   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:33:28.651376   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:33:28.686387   66178 cri.go:89] found id: "2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:28.686412   66178 cri.go:89] found id: ""
	I0717 01:33:28.686420   66178 logs.go:276] 1 containers: [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364]
	I0717 01:33:28.686473   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.691061   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:33:28.691128   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:33:28.728066   66178 cri.go:89] found id: "b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:28.728091   66178 cri.go:89] found id: ""
	I0717 01:33:28.728099   66178 logs.go:276] 1 containers: [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c]
	I0717 01:33:28.728147   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.732397   66178 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:33:28.732446   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:33:28.770233   66178 cri.go:89] found id: ""
	I0717 01:33:28.770261   66178 logs.go:276] 0 containers: []
	W0717 01:33:28.770270   66178 logs.go:278] No container was found matching "kindnet"
	I0717 01:33:28.770277   66178 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:33:28.770338   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:33:28.806271   66178 cri.go:89] found id: "a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:28.806296   66178 cri.go:89] found id: "dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:28.806302   66178 cri.go:89] found id: ""
	I0717 01:33:28.806311   66178 logs.go:276] 2 containers: [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272]
	I0717 01:33:28.806371   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.810691   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:28.814958   66178 logs.go:123] Gathering logs for kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] ...
	I0717 01:33:28.814976   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:28.856685   66178 logs.go:123] Gathering logs for etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] ...
	I0717 01:33:28.856712   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:28.897748   66178 logs.go:123] Gathering logs for kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] ...
	I0717 01:33:28.897790   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:28.958202   66178 logs.go:123] Gathering logs for coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] ...
	I0717 01:33:28.958228   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:28.999474   66178 logs.go:123] Gathering logs for kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] ...
	I0717 01:33:28.999501   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:29.035726   66178 logs.go:123] Gathering logs for kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] ...
	I0717 01:33:29.035758   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:29.072498   66178 logs.go:123] Gathering logs for storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] ...
	I0717 01:33:29.072524   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:29.110199   66178 logs.go:123] Gathering logs for storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] ...
	I0717 01:33:29.110226   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:29.144474   66178 logs.go:123] Gathering logs for kubelet ...
	I0717 01:33:29.144506   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:33:29.196286   66178 logs.go:123] Gathering logs for dmesg ...
	I0717 01:33:29.196315   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:33:29.210251   66178 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:33:29.210274   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:33:29.313845   66178 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:33:29.313877   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:33:29.748683   66178 logs.go:123] Gathering logs for container status ...
	I0717 01:33:29.748719   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:33:32.292005   66178 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8443/healthz ...
	I0717 01:33:32.296375   66178 api_server.go:279] https://192.168.72.48:8443/healthz returned 200:
	ok
	I0717 01:33:32.297480   66178 api_server.go:141] control plane version: v1.30.2
	I0717 01:33:32.297499   66178 api_server.go:131] duration metric: took 3.805966225s to wait for apiserver health ...
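
	[editor's aside] The lines above show the readiness gate used here: GET /healthz on the apiserver until it answers 200 "ok", then read the control-plane version. The sketch below is an illustrative stand-alone Go version of that polling pattern, not minikube's own code; the endpoint URL is copied from the log and TLS verification is skipped for brevity, whereas the real client authenticates against the cluster CA.

	// healthz_poll.go - hedged sketch of polling an apiserver /healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for the sketch only; production code verifies the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.48:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
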
	I0717 01:33:32.297507   66178 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:33:32.297528   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:33:32.297569   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:33:32.336526   66178 cri.go:89] found id: "d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:32.336566   66178 cri.go:89] found id: ""
	I0717 01:33:32.336576   66178 logs.go:276] 1 containers: [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026]
	I0717 01:33:32.336629   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.340838   66178 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:33:32.340904   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:33:32.375827   66178 cri.go:89] found id: "980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:32.375853   66178 cri.go:89] found id: ""
	I0717 01:33:32.375862   66178 logs.go:276] 1 containers: [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c]
	I0717 01:33:32.375920   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.380212   66178 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:33:32.380269   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:33:32.417036   66178 cri.go:89] found id: "370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:32.417063   66178 cri.go:89] found id: ""
	I0717 01:33:32.417075   66178 logs.go:276] 1 containers: [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187]
	I0717 01:33:32.417140   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.421437   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:33:32.421507   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:33:32.455708   66178 cri.go:89] found id: "98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:32.455732   66178 cri.go:89] found id: ""
	I0717 01:33:32.455741   66178 logs.go:276] 1 containers: [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802]
	I0717 01:33:32.455799   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.464218   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:33:32.464299   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:33:32.506931   66178 cri.go:89] found id: "2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:32.506958   66178 cri.go:89] found id: ""
	I0717 01:33:32.506968   66178 logs.go:276] 1 containers: [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364]
	I0717 01:33:32.507030   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.511493   66178 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:33:32.511562   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:33:32.554706   66178 cri.go:89] found id: "b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:32.554731   66178 cri.go:89] found id: ""
	I0717 01:33:32.554741   66178 logs.go:276] 1 containers: [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c]
	I0717 01:33:32.554806   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.559101   66178 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:33:32.559175   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:33:32.598078   66178 cri.go:89] found id: ""
	I0717 01:33:32.598113   66178 logs.go:276] 0 containers: []
	W0717 01:33:32.598126   66178 logs.go:278] No container was found matching "kindnet"
	I0717 01:33:32.598135   66178 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:33:32.598209   66178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:33:29.300812   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:34.426424   66659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.621528106s)
	I0717 01:33:34.426506   66659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:33:34.441446   66659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:33:34.451230   66659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:33:34.460682   66659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:33:34.460702   66659 kubeadm.go:157] found existing configuration files:
	
	I0717 01:33:34.460746   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 01:33:34.469447   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:33:34.469496   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:33:34.478412   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 01:33:34.487047   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:33:34.487096   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:33:34.496243   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 01:33:34.504852   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:33:34.504907   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:33:34.513592   66659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 01:33:34.521997   66659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:33:34.522048   66659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:33:34.530773   66659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:33:32.639086   66178 cri.go:89] found id: "a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:32.639113   66178 cri.go:89] found id: "dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:32.639119   66178 cri.go:89] found id: ""
	I0717 01:33:32.639127   66178 logs.go:276] 2 containers: [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272]
	I0717 01:33:32.639185   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.643404   66178 ssh_runner.go:195] Run: which crictl
	I0717 01:33:32.648144   66178 logs.go:123] Gathering logs for kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] ...
	I0717 01:33:32.648165   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c"
	I0717 01:33:32.700179   66178 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:33:32.700212   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:33:33.091798   66178 logs.go:123] Gathering logs for container status ...
	I0717 01:33:33.091840   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:33:33.142057   66178 logs.go:123] Gathering logs for kubelet ...
	I0717 01:33:33.142095   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:33:33.197532   66178 logs.go:123] Gathering logs for kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] ...
	I0717 01:33:33.197567   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026"
	I0717 01:33:33.248356   66178 logs.go:123] Gathering logs for etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] ...
	I0717 01:33:33.248393   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c"
	I0717 01:33:33.290624   66178 logs.go:123] Gathering logs for coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] ...
	I0717 01:33:33.290652   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187"
	I0717 01:33:33.338525   66178 logs.go:123] Gathering logs for kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] ...
	I0717 01:33:33.338557   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364"
	I0717 01:33:33.379963   66178 logs.go:123] Gathering logs for dmesg ...
	I0717 01:33:33.379998   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:33:33.393448   66178 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:33:33.393472   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:33:33.497330   66178 logs.go:123] Gathering logs for kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] ...
	I0717 01:33:33.497366   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802"
	I0717 01:33:33.534015   66178 logs.go:123] Gathering logs for storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] ...
	I0717 01:33:33.534048   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185"
	I0717 01:33:33.569753   66178 logs.go:123] Gathering logs for storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] ...
	I0717 01:33:33.569779   66178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272"
	I0717 01:33:36.112668   66178 system_pods.go:59] 8 kube-system pods found
	I0717 01:33:36.112698   66178 system_pods.go:61] "coredns-7db6d8ff4d-z4qpz" [43aa103c-9e70-4fb1-8607-321b6904a218] Running
	I0717 01:33:36.112704   66178 system_pods.go:61] "etcd-embed-certs-484167" [55918032-05ab-4a5b-951c-c8d4a063751e] Running
	I0717 01:33:36.112710   66178 system_pods.go:61] "kube-apiserver-embed-certs-484167" [39facb47-77a1-4eb7-9c7e-795b35adb238] Running
	I0717 01:33:36.112716   66178 system_pods.go:61] "kube-controller-manager-embed-certs-484167" [270c8cb6-2fdd-4cec-9692-ecc2950ce3b2] Running
	I0717 01:33:36.112721   66178 system_pods.go:61] "kube-proxy-gq7qg" [ac9a0ae4-28e0-4900-a39b-f7a0eba7cc06] Running
	I0717 01:33:36.112726   66178 system_pods.go:61] "kube-scheduler-embed-certs-484167" [e9ea6022-e399-42a3-b8c9-a09a57aa8126] Running
	I0717 01:33:36.112734   66178 system_pods.go:61] "metrics-server-569cc877fc-2qwf6" [caefc20d-d993-46cb-b815-e4ae30ce4e85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:33:36.112741   66178 system_pods.go:61] "storage-provisioner" [620df9ee-45a9-4b04-a21c-0ddc878375ca] Running
	I0717 01:33:36.112752   66178 system_pods.go:74] duration metric: took 3.81523968s to wait for pod list to return data ...
	I0717 01:33:36.112760   66178 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:33:36.114860   66178 default_sa.go:45] found service account: "default"
	I0717 01:33:36.114880   66178 default_sa.go:55] duration metric: took 2.115012ms for default service account to be created ...
	I0717 01:33:36.114888   66178 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:33:36.119333   66178 system_pods.go:86] 8 kube-system pods found
	I0717 01:33:36.119357   66178 system_pods.go:89] "coredns-7db6d8ff4d-z4qpz" [43aa103c-9e70-4fb1-8607-321b6904a218] Running
	I0717 01:33:36.119363   66178 system_pods.go:89] "etcd-embed-certs-484167" [55918032-05ab-4a5b-951c-c8d4a063751e] Running
	I0717 01:33:36.119368   66178 system_pods.go:89] "kube-apiserver-embed-certs-484167" [39facb47-77a1-4eb7-9c7e-795b35adb238] Running
	I0717 01:33:36.119372   66178 system_pods.go:89] "kube-controller-manager-embed-certs-484167" [270c8cb6-2fdd-4cec-9692-ecc2950ce3b2] Running
	I0717 01:33:36.119376   66178 system_pods.go:89] "kube-proxy-gq7qg" [ac9a0ae4-28e0-4900-a39b-f7a0eba7cc06] Running
	I0717 01:33:36.119382   66178 system_pods.go:89] "kube-scheduler-embed-certs-484167" [e9ea6022-e399-42a3-b8c9-a09a57aa8126] Running
	I0717 01:33:36.119392   66178 system_pods.go:89] "metrics-server-569cc877fc-2qwf6" [caefc20d-d993-46cb-b815-e4ae30ce4e85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:33:36.119401   66178 system_pods.go:89] "storage-provisioner" [620df9ee-45a9-4b04-a21c-0ddc878375ca] Running
	I0717 01:33:36.119410   66178 system_pods.go:126] duration metric: took 4.516516ms to wait for k8s-apps to be running ...
	I0717 01:33:36.119423   66178 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:33:36.119469   66178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:33:36.135747   66178 system_svc.go:56] duration metric: took 16.316004ms WaitForService to wait for kubelet
	I0717 01:33:36.135778   66178 kubeadm.go:582] duration metric: took 4m21.94286469s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:33:36.135806   66178 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:33:36.140253   66178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:33:36.140274   66178 node_conditions.go:123] node cpu capacity is 2
	I0717 01:33:36.140285   66178 node_conditions.go:105] duration metric: took 4.473888ms to run NodePressure ...
	I0717 01:33:36.140296   66178 start.go:241] waiting for startup goroutines ...
	I0717 01:33:36.140306   66178 start.go:246] waiting for cluster config update ...
	I0717 01:33:36.140326   66178 start.go:255] writing updated cluster config ...
	I0717 01:33:36.140642   66178 ssh_runner.go:195] Run: rm -f paused
	I0717 01:33:36.188858   66178 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:33:36.191016   66178 out.go:177] * Done! kubectl is now configured to use "embed-certs-484167" cluster and "default" namespace by default
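
	[editor's aside] The "Done!" line reports that kubectl is now pointed at the "embed-certs-484167" cluster. A minimal way to confirm that from outside the test harness is sketched below; it just shells out to kubectl (assumed to be on PATH) and is not part of the test suite.

	// context_check.go - hedged sketch: confirm the active kubectl context and node list.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, args := range [][]string{
			{"config", "current-context"}, // should print the profile name, e.g. embed-certs-484167
			{"get", "nodes", "-o", "wide"},
		} {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err != nil {
				fmt.Printf("kubectl %v failed: %v\n%s", args, err, out)
				continue
			}
			fmt.Printf("$ kubectl %v\n%s\n", args, out)
		}
	}
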
	I0717 01:33:35.376822   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:38.448812   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:34.720645   66659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:33:43.308866   66659 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 01:33:43.308943   66659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:33:43.309108   66659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:33:43.309260   66659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:33:43.309392   66659 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:33:43.309485   66659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:33:43.311060   66659 out.go:204]   - Generating certificates and keys ...
	I0717 01:33:43.311120   66659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:33:43.311229   66659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:33:43.311320   66659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 01:33:43.311396   66659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 01:33:43.311505   66659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 01:33:43.311595   66659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 01:33:43.311682   66659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 01:33:43.311746   66659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 01:33:43.311807   66659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 01:33:43.311893   66659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 01:33:43.311960   66659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 01:33:43.312019   66659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:33:43.312083   66659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:33:43.312165   66659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 01:33:43.312247   66659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:33:43.312337   66659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:33:43.312395   66659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:33:43.312479   66659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:33:43.312534   66659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:33:43.313917   66659 out.go:204]   - Booting up control plane ...
	I0717 01:33:43.313994   66659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:33:43.314085   66659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:33:43.314183   66659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:33:43.314304   66659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:33:43.314415   66659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:33:43.314471   66659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:33:43.314608   66659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 01:33:43.314728   66659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 01:33:43.314817   66659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00137795s
	I0717 01:33:43.314955   66659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 01:33:43.315048   66659 kubeadm.go:310] [api-check] The API server is healthy after 5.002451289s
	I0717 01:33:43.315206   66659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 01:33:43.315310   66659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 01:33:43.315364   66659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 01:33:43.315550   66659 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-945694 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 01:33:43.315640   66659 kubeadm.go:310] [bootstrap-token] Using token: eqtrsf.jetqj440l3wkhk98
	I0717 01:33:43.317933   66659 out.go:204]   - Configuring RBAC rules ...
	I0717 01:33:43.318050   66659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 01:33:43.318148   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 01:33:43.318293   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 01:33:43.318405   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 01:33:43.318513   66659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 01:33:43.318599   66659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 01:33:43.318755   66659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 01:33:43.318831   66659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 01:33:43.318883   66659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 01:33:43.318890   66659 kubeadm.go:310] 
	I0717 01:33:43.318937   66659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 01:33:43.318945   66659 kubeadm.go:310] 
	I0717 01:33:43.319058   66659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 01:33:43.319068   66659 kubeadm.go:310] 
	I0717 01:33:43.319102   66659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 01:33:43.319189   66659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 01:33:43.319251   66659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 01:33:43.319257   66659 kubeadm.go:310] 
	I0717 01:33:43.319333   66659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 01:33:43.319343   66659 kubeadm.go:310] 
	I0717 01:33:43.319407   66659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 01:33:43.319416   66659 kubeadm.go:310] 
	I0717 01:33:43.319485   66659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 01:33:43.319607   66659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 01:33:43.319690   66659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 01:33:43.319698   66659 kubeadm.go:310] 
	I0717 01:33:43.319797   66659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 01:33:43.319910   66659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 01:33:43.319925   66659 kubeadm.go:310] 
	I0717 01:33:43.320045   66659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token eqtrsf.jetqj440l3wkhk98 \
	I0717 01:33:43.320187   66659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 \
	I0717 01:33:43.320232   66659 kubeadm.go:310] 	--control-plane 
	I0717 01:33:43.320239   66659 kubeadm.go:310] 
	I0717 01:33:43.320349   66659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 01:33:43.320359   66659 kubeadm.go:310] 
	I0717 01:33:43.320469   66659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token eqtrsf.jetqj440l3wkhk98 \
	I0717 01:33:43.320642   66659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 
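
	[editor's aside] The join commands printed above carry a --discovery-token-ca-cert-hash. As a hedged aside (following the procedure documented for kubeadm, not code from this test run), that hash can be recomputed from the cluster CA certificate on the control plane to check it matches:

	// ca_hash.go - hedged sketch: recompute the discovery-token CA cert hash.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Pipeline from the kubeadm documentation: SHA-256 over the DER-encoded
		// public key of /etc/kubernetes/pki/ca.crt (requires root on the node).
		script := `openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | ` +
			`openssl rsa -pubin -outform der 2>/dev/null | ` +
			`openssl dgst -sha256 -hex | sed 's/^.* //'`
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		if err != nil {
			fmt.Printf("failed to compute hash: %v\n%s", err, out)
			return
		}
		fmt.Printf("sha256:%s", out)
	}
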
	I0717 01:33:43.320672   66659 cni.go:84] Creating CNI manager for ""
	I0717 01:33:43.320685   66659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:33:43.322373   66659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:33:43.323549   66659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:33:43.336069   66659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
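
	[editor's aside] The log only records that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; its contents are not shown. Purely as an illustration of the bridge CNI configuration being set up here, the sketch below writes a generic bridge + host-local conflist in the standard CNI format. The subnet, field values, and file path are assumptions, not the file minikube actually writes.

	// write_conflist.go - hedged sketch of a bridge CNI conflist (contents assumed).
	package main

	import (
		"log"
		"os"
	)

	const bridgeConflist = `{
	  "cniVersion": "0.4.0",
	  "name": "k8s-pod-network",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		// Writing to /etc/cni/net.d normally requires root.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}
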
	I0717 01:33:43.354981   66659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:33:43.355060   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:43.355068   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-945694 minikube.k8s.io/updated_at=2024_07_17T01_33_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=default-k8s-diff-port-945694 minikube.k8s.io/primary=true
	I0717 01:33:43.564470   66659 ops.go:34] apiserver oom_adj: -16
	I0717 01:33:43.564611   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:44.065352   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:44.528766   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:47.604799   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:44.565059   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:45.065658   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:45.565085   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:46.064718   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:46.564689   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:47.064998   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:47.564664   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:48.064694   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:48.565187   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:49.065439   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:49.564950   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:50.065001   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:50.565505   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:51.065369   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:51.564969   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:52.065293   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:52.564953   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:53.065324   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:53.565120   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:54.065189   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:54.565611   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:55.065105   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:55.565494   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:56.065453   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:56.565393   66659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:33:56.656280   66659 kubeadm.go:1113] duration metric: took 13.301288619s to wait for elevateKubeSystemPrivileges
	I0717 01:33:56.656319   66659 kubeadm.go:394] duration metric: took 5m12.994113939s to StartCluster
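
	[editor's aside] The repeated "kubectl get sa default" lines above are a poll: the default ServiceAccount is queried roughly every half second until the call succeeds, after which RBAC setup proceeds. A stand-alone sketch of that retry pattern follows; it shells out to kubectl, and the kubeconfig path is an assumption for the example.

	// wait_default_sa.go - hedged sketch of polling for the default ServiceAccount.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
			if err := cmd.Run(); err == nil {
				return nil // ServiceAccount exists; later steps can proceed
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default ServiceAccount not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
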
	I0717 01:33:56.656341   66659 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:33:56.656429   66659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:33:56.658062   66659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:33:56.658318   66659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:33:56.658384   66659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:33:56.658471   66659 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-945694"
	I0717 01:33:56.658506   66659 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-945694"
	W0717 01:33:56.658516   66659 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:33:56.658514   66659 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-945694"
	I0717 01:33:56.658545   66659 host.go:66] Checking if "default-k8s-diff-port-945694" exists ...
	I0717 01:33:56.658544   66659 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-945694"
	I0717 01:33:56.658565   66659 config.go:182] Loaded profile config "default-k8s-diff-port-945694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:33:56.658566   66659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-945694"
	I0717 01:33:56.658590   66659 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-945694"
	W0717 01:33:56.658603   66659 addons.go:243] addon metrics-server should already be in state true
	I0717 01:33:56.658631   66659 host.go:66] Checking if "default-k8s-diff-port-945694" exists ...
	I0717 01:33:56.658840   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.658867   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.658941   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.658967   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.658946   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.659047   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.660042   66659 out.go:177] * Verifying Kubernetes components...
	I0717 01:33:56.661365   66659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:33:56.675427   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0717 01:33:56.675919   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.676434   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.676455   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.676887   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.677764   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.677807   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.678856   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44785
	I0717 01:33:56.679033   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0717 01:33:56.679281   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.679550   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.680055   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.680079   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.680153   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.680173   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.680443   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.680523   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.680711   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.681210   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.681252   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.684317   66659 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-945694"
	W0717 01:33:56.684338   66659 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:33:56.684362   66659 host.go:66] Checking if "default-k8s-diff-port-945694" exists ...
	I0717 01:33:56.684670   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.684706   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.693393   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0717 01:33:56.693836   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.694292   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.694309   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.694640   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.694801   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.696212   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .DriverName
	I0717 01:33:56.698217   66659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:33:56.699432   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:33:56.699455   66659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:33:56.699472   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHHostname
	I0717 01:33:56.700565   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I0717 01:33:56.701036   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.701563   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.701578   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.701920   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.702150   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.702903   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.703250   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:3e:63", ip: ""} in network mk-default-k8s-diff-port-945694: {Iface:virbr2 ExpiryTime:2024-07-17 02:28:27 +0000 UTC Type:0 Mac:52:54:00:c9:3e:63 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-945694 Clientid:01:52:54:00:c9:3e:63}
	I0717 01:33:56.703275   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined IP address 192.168.50.30 and MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.703457   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHPort
	I0717 01:33:56.703732   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .DriverName
	I0717 01:33:56.703951   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHKeyPath
	I0717 01:33:56.704282   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHUsername
	I0717 01:33:56.704422   66659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/default-k8s-diff-port-945694/id_rsa Username:docker}
	I0717 01:33:56.705576   66659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:33:56.707192   66659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:33:56.707207   66659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:33:56.707219   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHHostname
	I0717 01:33:56.707551   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0717 01:33:56.708045   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.708589   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.708611   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.708957   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.709503   66659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:33:56.709545   66659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:56.710201   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.710818   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:3e:63", ip: ""} in network mk-default-k8s-diff-port-945694: {Iface:virbr2 ExpiryTime:2024-07-17 02:28:27 +0000 UTC Type:0 Mac:52:54:00:c9:3e:63 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-945694 Clientid:01:52:54:00:c9:3e:63}
	I0717 01:33:56.710854   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined IP address 192.168.50.30 and MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.711103   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHPort
	I0717 01:33:56.711476   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHKeyPath
	I0717 01:33:56.711751   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHUsername
	I0717 01:33:56.711938   66659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/default-k8s-diff-port-945694/id_rsa Username:docker}
	I0717 01:33:56.724041   66659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44045
	I0717 01:33:56.724450   66659 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:56.724943   66659 main.go:141] libmachine: Using API Version  1
	I0717 01:33:56.724965   66659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:56.725264   66659 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:56.725481   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetState
	I0717 01:33:56.727357   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .DriverName
	I0717 01:33:56.727567   66659 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:33:56.727579   66659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:33:56.727592   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHHostname
	I0717 01:33:56.730575   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.730916   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:3e:63", ip: ""} in network mk-default-k8s-diff-port-945694: {Iface:virbr2 ExpiryTime:2024-07-17 02:28:27 +0000 UTC Type:0 Mac:52:54:00:c9:3e:63 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:default-k8s-diff-port-945694 Clientid:01:52:54:00:c9:3e:63}
	I0717 01:33:56.730930   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | domain default-k8s-diff-port-945694 has defined IP address 192.168.50.30 and MAC address 52:54:00:c9:3e:63 in network mk-default-k8s-diff-port-945694
	I0717 01:33:56.731147   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHPort
	I0717 01:33:56.731295   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHKeyPath
	I0717 01:33:56.731414   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .GetSSHUsername
	I0717 01:33:56.731558   66659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/default-k8s-diff-port-945694/id_rsa Username:docker}
	I0717 01:33:56.880324   66659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:33:56.907224   66659 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-945694" to be "Ready" ...
	I0717 01:33:56.916791   66659 node_ready.go:49] node "default-k8s-diff-port-945694" has status "Ready":"True"
	I0717 01:33:56.916814   66659 node_ready.go:38] duration metric: took 9.553813ms for node "default-k8s-diff-port-945694" to be "Ready" ...
	I0717 01:33:56.916825   66659 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:56.929744   66659 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jbsq5" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:56.991132   66659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:33:57.020549   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:33:57.020582   66659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:33:57.041856   66659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:33:57.095649   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:33:57.095672   66659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:33:57.145707   66659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:33:57.145737   66659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:33:57.220983   66659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:33:57.569863   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.569888   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.569966   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.569995   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.570184   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.570210   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.570221   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.570221   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.570255   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.570230   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.570274   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.570289   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.570314   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.570325   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.570476   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.570508   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.570514   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.572038   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.572054   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.572095   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.584086   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.584114   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.584383   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.584402   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.951559   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.951583   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.952039   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.952039   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) DBG | Closing plugin on server side
	I0717 01:33:57.952055   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.952068   66659 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:57.952076   66659 main.go:141] libmachine: (default-k8s-diff-port-945694) Calling .Close
	I0717 01:33:57.952317   66659 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:57.952328   66659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:57.952338   66659 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-945694"
	I0717 01:33:57.954803   66659 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:33:53.680800   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:56.752809   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:33:57.956002   66659 addons.go:510] duration metric: took 1.29761252s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
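
	[editor's aside] Once "enable addons" reports storage-provisioner, default-storageclass, and metrics-server as enabled, a quick external verification of the metrics-server addon is sketched below (not part of the test suite): check its deployment and the APIService it typically registers, then attempt a metrics query. The resource names are the usual upstream defaults and are assumptions here.

	// verify_metrics_server.go - hedged sketch of checking the metrics-server addon.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		checks := [][]string{
			{"-n", "kube-system", "get", "deployment", "metrics-server"},
			{"get", "apiservice", "v1beta1.metrics.k8s.io"},
			{"top", "nodes"}, // fails until the first metrics scrape has happened
		}
		for _, args := range checks {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			fmt.Printf("$ kubectl %v\n%s", args, out)
			if err != nil {
				fmt.Printf("(not ready yet: %v)\n", err)
			}
		}
	}
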
	I0717 01:33:58.936404   66659 pod_ready.go:92] pod "coredns-7db6d8ff4d-jbsq5" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.936430   66659 pod_ready.go:81] duration metric: took 2.006657028s for pod "coredns-7db6d8ff4d-jbsq5" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.936440   66659 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mqjqg" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.940948   66659 pod_ready.go:92] pod "coredns-7db6d8ff4d-mqjqg" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.940968   66659 pod_ready.go:81] duration metric: took 4.522302ms for pod "coredns-7db6d8ff4d-mqjqg" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.940976   66659 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.944815   66659 pod_ready.go:92] pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.944830   66659 pod_ready.go:81] duration metric: took 3.847888ms for pod "etcd-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.944838   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.949022   66659 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.949039   66659 pod_ready.go:81] duration metric: took 4.196556ms for pod "kube-apiserver-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.949049   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.953438   66659 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:58.953456   66659 pod_ready.go:81] duration metric: took 4.401091ms for pod "kube-controller-manager-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:58.953467   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55xmv" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.335149   66659 pod_ready.go:92] pod "kube-proxy-55xmv" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:59.335174   66659 pod_ready.go:81] duration metric: took 381.700119ms for pod "kube-proxy-55xmv" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.335187   66659 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.734445   66659 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:59.734473   66659 pod_ready.go:81] duration metric: took 399.276861ms for pod "kube-scheduler-default-k8s-diff-port-945694" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:59.734483   66659 pod_ready.go:38] duration metric: took 2.817646454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:59.734499   66659 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:33:59.734557   66659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:33:59.750547   66659 api_server.go:72] duration metric: took 3.092197547s to wait for apiserver process to appear ...
	I0717 01:33:59.750573   66659 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:33:59.750595   66659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8444/healthz ...
	I0717 01:33:59.755670   66659 api_server.go:279] https://192.168.50.30:8444/healthz returned 200:
	ok
	I0717 01:33:59.756553   66659 api_server.go:141] control plane version: v1.30.2
	I0717 01:33:59.756591   66659 api_server.go:131] duration metric: took 6.009468ms to wait for apiserver health ...
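
The healthz wait logged above amounts to polling the apiserver endpoint until it answers 200 "ok". A minimal Go sketch of that loop, assuming the same https://192.168.50.30:8444/healthz URL and skipping TLS verification purely to keep the example short (minikube itself trusts the cluster CA):

// healthzwait.go - poll an apiserver /healthz endpoint until it returns 200 "ok".
// Illustrative sketch only; the URL, timeout, and InsecureSkipVerify are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cluster-CA-signed cert; skipping verification
			// keeps the sketch short. Real callers should load the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.30:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
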
	I0717 01:33:59.756599   66659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:33:59.938573   66659 system_pods.go:59] 9 kube-system pods found
	I0717 01:33:59.938605   66659 system_pods.go:61] "coredns-7db6d8ff4d-jbsq5" [0a95f33d-19ef-4b2e-a94e-08bbcaff92dc] Running
	I0717 01:33:59.938611   66659 system_pods.go:61] "coredns-7db6d8ff4d-mqjqg" [ca27ce06-d171-4edd-9a1d-11898283f3ac] Running
	I0717 01:33:59.938615   66659 system_pods.go:61] "etcd-default-k8s-diff-port-945694" [213d53e1-92c9-4b8a-b9ff-6b7f12acd149] Running
	I0717 01:33:59.938618   66659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-945694" [b22e53fb-feec-4684-a672-f9c9b326bc36] Running
	I0717 01:33:59.938622   66659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-945694" [dc840bd9-5087-4642-8e84-8392d188e85f] Running
	I0717 01:33:59.938626   66659 system_pods.go:61] "kube-proxy-55xmv" [ee6913d5-3362-4a9f-a159-1f9b1da7380a] Running
	I0717 01:33:59.938631   66659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-945694" [7bfa8bdb-a9af-4e6b-8a11-f9b6791e2647] Running
	I0717 01:33:59.938640   66659 system_pods.go:61] "metrics-server-569cc877fc-4nffv" [ba214ec1-a180-42ec-847e-80464e102765] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:33:59.938646   66659 system_pods.go:61] "storage-provisioner" [3352a0de-41db-4537-b87a-24137084aa7a] Running
	I0717 01:33:59.938657   66659 system_pods.go:74] duration metric: took 182.050448ms to wait for pod list to return data ...
	I0717 01:33:59.938669   66659 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:34:00.133695   66659 default_sa.go:45] found service account: "default"
	I0717 01:34:00.133719   66659 default_sa.go:55] duration metric: took 195.042344ms for default service account to be created ...
	I0717 01:34:00.133729   66659 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:34:00.338087   66659 system_pods.go:86] 9 kube-system pods found
	I0717 01:34:00.338127   66659 system_pods.go:89] "coredns-7db6d8ff4d-jbsq5" [0a95f33d-19ef-4b2e-a94e-08bbcaff92dc] Running
	I0717 01:34:00.338137   66659 system_pods.go:89] "coredns-7db6d8ff4d-mqjqg" [ca27ce06-d171-4edd-9a1d-11898283f3ac] Running
	I0717 01:34:00.338143   66659 system_pods.go:89] "etcd-default-k8s-diff-port-945694" [213d53e1-92c9-4b8a-b9ff-6b7f12acd149] Running
	I0717 01:34:00.338151   66659 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-945694" [b22e53fb-feec-4684-a672-f9c9b326bc36] Running
	I0717 01:34:00.338159   66659 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-945694" [dc840bd9-5087-4642-8e84-8392d188e85f] Running
	I0717 01:34:00.338166   66659 system_pods.go:89] "kube-proxy-55xmv" [ee6913d5-3362-4a9f-a159-1f9b1da7380a] Running
	I0717 01:34:00.338173   66659 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-945694" [7bfa8bdb-a9af-4e6b-8a11-f9b6791e2647] Running
	I0717 01:34:00.338184   66659 system_pods.go:89] "metrics-server-569cc877fc-4nffv" [ba214ec1-a180-42ec-847e-80464e102765] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:34:00.338196   66659 system_pods.go:89] "storage-provisioner" [3352a0de-41db-4537-b87a-24137084aa7a] Running
	I0717 01:34:00.338205   66659 system_pods.go:126] duration metric: took 204.470489ms to wait for k8s-apps to be running ...
	I0717 01:34:00.338218   66659 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:34:00.338274   66659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:34:00.352151   66659 system_svc.go:56] duration metric: took 13.921542ms WaitForService to wait for kubelet
	I0717 01:34:00.352188   66659 kubeadm.go:582] duration metric: took 3.693843091s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:34:00.352213   66659 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:34:00.535457   66659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:34:00.535478   66659 node_conditions.go:123] node cpu capacity is 2
	I0717 01:34:00.535489   66659 node_conditions.go:105] duration metric: took 183.271273ms to run NodePressure ...
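
The NodePressure check above reads each node's reported capacity (17734596Ki ephemeral storage and 2 CPUs here). A sketch of pulling the same figures with client-go, assuming a kubeconfig at $HOME/.kube/config that points at the test cluster:

// nodecaps.go - list node CPU and ephemeral-storage capacity with client-go,
// the same data reported by the NodePressure check above. A sketch, assuming a
// kubeconfig at $HOME/.kube/config for the test cluster.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n",
			n.Name, cpu.String(), eph.String())
	}
}
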
	I0717 01:34:00.535500   66659 start.go:241] waiting for startup goroutines ...
	I0717 01:34:00.535506   66659 start.go:246] waiting for cluster config update ...
	I0717 01:34:00.535515   66659 start.go:255] writing updated cluster config ...
	I0717 01:34:00.535731   66659 ssh_runner.go:195] Run: rm -f paused
	I0717 01:34:00.581917   66659 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:34:00.583994   66659 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-945694" cluster and "default" namespace by default
	I0717 01:34:02.832840   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:05.904845   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:11.984893   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:15.056813   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:21.136802   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:24.208771   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:30.288821   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:33.360818   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:39.440802   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:42.512824   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:48.592870   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:51.668822   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:34:57.744791   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:00.816890   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:06.896783   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:09.968897   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:16.048887   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:19.120810   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:25.200832   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:28.272897   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:34.352811   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:37.424805   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:43.504775   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:46.576767   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:52.656845   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:35:55.728841   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:01.808828   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:04.880828   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:10.964781   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:14.032790   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:20.112803   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:23.184780   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:29.264888   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:32.340810   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:38.416815   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:41.488801   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:47.572801   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:50.640840   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:56.720825   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:36:59.792797   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:05.876784   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:08.944812   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:15.024792   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:18.096815   69161 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0717 01:37:21.098660   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:37:21.098691   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:21.098996   69161 buildroot.go:166] provisioning hostname "no-preload-818382"
	I0717 01:37:21.099019   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:21.099239   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:21.100820   69161 machine.go:97] duration metric: took 4m37.425586326s to provisionDockerMachine
	I0717 01:37:21.100856   69161 fix.go:56] duration metric: took 4m37.44749197s for fixHost
	I0717 01:37:21.100862   69161 start.go:83] releasing machines lock for "no-preload-818382", held for 4m37.447517491s
	W0717 01:37:21.100875   69161 start.go:714] error starting host: provision: host is not running
	W0717 01:37:21.100944   69161 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 01:37:21.100953   69161 start.go:729] Will try again in 5 seconds ...
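
The restart path that follows combines a fixed outer retry (the 5-second wait above) with a growing, jittered inner backoff while the VM acquires an IP, visible in the retry.go "will retry after ..." lines further down. A rough sketch of that pattern; the base interval, growth factor, and jitter are illustrative assumptions, not minikube's exact constants:

// retrysketch.go - fixed attempt budget with a jittered, growing backoff, mirroring
// the "will retry after ...: waiting for machine to come up" lines below.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
// jittered, growing interval between tries and logging each wait.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	wait := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jittered := wait + time.Duration(rand.Int63n(int64(wait)/2+1))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		wait = wait * 3 / 2 // grow the interval ~1.5x per attempt (assumed factor)
	}
	return errors.New("machine did not come up")
}

func main() {
	tries := 0
	err := retryWithBackoff(func() error {
		tries++
		if tries < 5 {
			return errors.New("no IP yet")
		}
		return nil
	}, 15, 200*time.Millisecond)
	fmt.Println("result:", err)
}
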
	I0717 01:37:26.102733   69161 start.go:360] acquireMachinesLock for no-preload-818382: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:37:26.102820   69161 start.go:364] duration metric: took 53.679µs to acquireMachinesLock for "no-preload-818382"
	I0717 01:37:26.102845   69161 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:37:26.102852   69161 fix.go:54] fixHost starting: 
	I0717 01:37:26.103150   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:37:26.103173   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:37:26.119906   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33241
	I0717 01:37:26.120394   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:37:26.120930   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:37:26.120952   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:37:26.121328   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:37:26.121541   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:26.121680   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:37:26.123050   69161 fix.go:112] recreateIfNeeded on no-preload-818382: state=Stopped err=<nil>
	I0717 01:37:26.123069   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	W0717 01:37:26.123226   69161 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:37:26.125020   69161 out.go:177] * Restarting existing kvm2 VM for "no-preload-818382" ...
	I0717 01:37:26.126273   69161 main.go:141] libmachine: (no-preload-818382) Calling .Start
	I0717 01:37:26.126469   69161 main.go:141] libmachine: (no-preload-818382) Ensuring networks are active...
	I0717 01:37:26.127225   69161 main.go:141] libmachine: (no-preload-818382) Ensuring network default is active
	I0717 01:37:26.127552   69161 main.go:141] libmachine: (no-preload-818382) Ensuring network mk-no-preload-818382 is active
	I0717 01:37:26.127899   69161 main.go:141] libmachine: (no-preload-818382) Getting domain xml...
	I0717 01:37:26.128571   69161 main.go:141] libmachine: (no-preload-818382) Creating domain...
	I0717 01:37:27.345119   69161 main.go:141] libmachine: (no-preload-818382) Waiting to get IP...
	I0717 01:37:27.346205   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:27.346716   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:27.346764   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:27.346681   70303 retry.go:31] will retry after 199.66464ms: waiting for machine to come up
	I0717 01:37:27.548206   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:27.548848   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:27.548873   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:27.548815   70303 retry.go:31] will retry after 280.929524ms: waiting for machine to come up
	I0717 01:37:27.831501   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:27.831934   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:27.831964   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:27.831916   70303 retry.go:31] will retry after 301.466781ms: waiting for machine to come up
	I0717 01:37:28.135465   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:28.135945   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:28.135981   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:28.135907   70303 retry.go:31] will retry after 393.103911ms: waiting for machine to come up
	I0717 01:37:28.530344   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:28.530791   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:28.530815   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:28.530761   70303 retry.go:31] will retry after 518.699896ms: waiting for machine to come up
	I0717 01:37:29.051266   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:29.051722   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:29.051763   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:29.051702   70303 retry.go:31] will retry after 618.253779ms: waiting for machine to come up
	I0717 01:37:29.671578   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:29.672083   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:29.672111   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:29.672032   70303 retry.go:31] will retry after 718.051367ms: waiting for machine to come up
	I0717 01:37:30.391904   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:30.392339   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:30.392367   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:30.392290   70303 retry.go:31] will retry after 1.040644293s: waiting for machine to come up
	I0717 01:37:31.434846   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:31.435419   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:31.435467   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:31.435401   70303 retry.go:31] will retry after 1.802022391s: waiting for machine to come up
	I0717 01:37:33.238798   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:33.239381   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:33.239409   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:33.239333   70303 retry.go:31] will retry after 1.417897015s: waiting for machine to come up
	I0717 01:37:34.658523   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:34.659018   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:34.659046   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:34.658971   70303 retry.go:31] will retry after 2.736057609s: waiting for machine to come up
	I0717 01:37:37.396582   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:37.397249   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:37.397279   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:37.397179   70303 retry.go:31] will retry after 2.2175965s: waiting for machine to come up
	I0717 01:37:39.616404   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:39.616819   69161 main.go:141] libmachine: (no-preload-818382) DBG | unable to find current IP address of domain no-preload-818382 in network mk-no-preload-818382
	I0717 01:37:39.616852   69161 main.go:141] libmachine: (no-preload-818382) DBG | I0717 01:37:39.616775   70303 retry.go:31] will retry after 4.136811081s: waiting for machine to come up
	I0717 01:37:43.754795   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.755339   69161 main.go:141] libmachine: (no-preload-818382) Found IP for machine: 192.168.39.38
	I0717 01:37:43.755364   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has current primary IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.755370   69161 main.go:141] libmachine: (no-preload-818382) Reserving static IP address...
	I0717 01:37:43.755825   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "no-preload-818382", mac: "52:54:00:e4:de:04", ip: "192.168.39.38"} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.755856   69161 main.go:141] libmachine: (no-preload-818382) Reserved static IP address: 192.168.39.38
	I0717 01:37:43.755870   69161 main.go:141] libmachine: (no-preload-818382) DBG | skip adding static IP to network mk-no-preload-818382 - found existing host DHCP lease matching {name: "no-preload-818382", mac: "52:54:00:e4:de:04", ip: "192.168.39.38"}
	I0717 01:37:43.755885   69161 main.go:141] libmachine: (no-preload-818382) DBG | Getting to WaitForSSH function...
	I0717 01:37:43.755893   69161 main.go:141] libmachine: (no-preload-818382) Waiting for SSH to be available...
	I0717 01:37:43.758007   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.758337   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.758366   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.758581   69161 main.go:141] libmachine: (no-preload-818382) DBG | Using SSH client type: external
	I0717 01:37:43.758615   69161 main.go:141] libmachine: (no-preload-818382) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa (-rw-------)
	I0717 01:37:43.758640   69161 main.go:141] libmachine: (no-preload-818382) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:37:43.758650   69161 main.go:141] libmachine: (no-preload-818382) DBG | About to run SSH command:
	I0717 01:37:43.758662   69161 main.go:141] libmachine: (no-preload-818382) DBG | exit 0
	I0717 01:37:43.884574   69161 main.go:141] libmachine: (no-preload-818382) DBG | SSH cmd err, output: <nil>: 
	I0717 01:37:43.884894   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetConfigRaw
	I0717 01:37:43.885637   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:43.888140   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.888641   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.888673   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.888992   69161 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/config.json ...
	I0717 01:37:43.889212   69161 machine.go:94] provisionDockerMachine start ...
	I0717 01:37:43.889237   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:43.889449   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:43.892095   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.892409   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:43.892451   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:43.892636   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:43.892814   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:43.892978   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:43.893129   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:43.893272   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:43.893472   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:43.893487   69161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:37:44.004698   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:37:44.004726   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:44.005009   69161 buildroot.go:166] provisioning hostname "no-preload-818382"
	I0717 01:37:44.005035   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:44.005206   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.008187   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.008700   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.008726   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.008920   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.009094   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.009286   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.009441   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.009612   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:44.009770   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:44.009781   69161 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-818382 && echo "no-preload-818382" | sudo tee /etc/hostname
	I0717 01:37:44.136253   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-818382
	
	I0717 01:37:44.136281   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.138973   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.139255   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.139284   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.139469   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.139643   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.139828   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.140012   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.140288   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:44.140479   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:44.140504   69161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-818382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-818382/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-818382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:37:44.266505   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:37:44.266534   69161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 01:37:44.266551   69161 buildroot.go:174] setting up certificates
	I0717 01:37:44.266562   69161 provision.go:84] configureAuth start
	I0717 01:37:44.266580   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetMachineName
	I0717 01:37:44.266878   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:44.269798   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.270235   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.270268   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.270404   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.272533   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.272880   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.272907   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.273042   69161 provision.go:143] copyHostCerts
	I0717 01:37:44.273125   69161 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 01:37:44.273144   69161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 01:37:44.273206   69161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 01:37:44.273316   69161 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 01:37:44.273326   69161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 01:37:44.273351   69161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 01:37:44.273410   69161 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 01:37:44.273414   69161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 01:37:44.273433   69161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 01:37:44.273487   69161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.no-preload-818382 san=[127.0.0.1 192.168.39.38 localhost minikube no-preload-818382]
	I0717 01:37:44.479434   69161 provision.go:177] copyRemoteCerts
	I0717 01:37:44.479494   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:37:44.479540   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.482477   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.482908   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.482946   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.483128   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.483327   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.483455   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.483580   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:44.571236   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:37:44.596972   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 01:37:44.621104   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:37:44.643869   69161 provision.go:87] duration metric: took 377.294141ms to configureAuth
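
configureAuth above regenerates the Docker-machine server certificate with the SAN list logged a few lines earlier (127.0.0.1, 192.168.39.38, localhost, minikube, no-preload-818382). A self-contained sketch of producing a certificate with those SANs using the Go standard library; it self-signs for brevity, whereas minikube signs with its ca.pem, and the key size, validity, and output file name are assumptions:

// certsketch.go - create a server certificate whose SANs match the "san=[...]"
// list logged above. Sketch only: it self-signs instead of using minikube's CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-818382"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-818382"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.38")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Write server.pem; the matching server-key.pem would be written the same way.
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
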
	I0717 01:37:44.643898   69161 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:37:44.644105   69161 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:37:44.644180   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.646792   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.647149   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.647179   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.647336   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.647539   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.647675   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.647780   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.647927   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:44.648096   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:44.648110   69161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:37:44.939532   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:37:44.939559   69161 machine.go:97] duration metric: took 1.050331351s to provisionDockerMachine
	I0717 01:37:44.939571   69161 start.go:293] postStartSetup for "no-preload-818382" (driver="kvm2")
	I0717 01:37:44.939587   69161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:37:44.939631   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:44.940024   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:37:44.940056   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:44.942783   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.943199   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:44.943225   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:44.943340   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:44.943504   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:44.943643   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:44.943806   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:45.027519   69161 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:37:45.031577   69161 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:37:45.031599   69161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:37:45.031667   69161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:37:45.031760   69161 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:37:45.031877   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:37:45.041021   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:37:45.064965   69161 start.go:296] duration metric: took 125.382388ms for postStartSetup
	I0717 01:37:45.064998   69161 fix.go:56] duration metric: took 18.96214661s for fixHost
	I0717 01:37:45.065016   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:45.067787   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.068183   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.068217   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.068340   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:45.068582   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.068751   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.068904   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:45.069063   69161 main.go:141] libmachine: Using SSH client type: native
	I0717 01:37:45.069226   69161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0717 01:37:45.069239   69161 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:37:45.181490   69161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180265.155979386
	
	I0717 01:37:45.181513   69161 fix.go:216] guest clock: 1721180265.155979386
	I0717 01:37:45.181522   69161 fix.go:229] Guest: 2024-07-17 01:37:45.155979386 +0000 UTC Remote: 2024-07-17 01:37:45.065002166 +0000 UTC m=+301.553951222 (delta=90.97722ms)
	I0717 01:37:45.181546   69161 fix.go:200] guest clock delta is within tolerance: 90.97722ms
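
The guest clock check above runs a seconds.nanoseconds date probe over SSH (the `date +%!s(MISSING).%!N(MISSING)` line appears to be the logger mangling a `date +%s.%N` command) and compares the result with the host clock. A small sketch of that delta computation; the 2-second tolerance used here is an assumption for illustration:

// clockdelta.go - parse the guest's seconds.nanoseconds output and check the skew
// against the host clock, as in the fix.go lines above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to 9 digits before parsing nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		nsec, _ = strconv.ParseInt(frac, 10, 64)
	}
	guest := time.Unix(sec, nsec)
	return guest.Sub(host), nil
}

func main() {
	delta, err := guestClockDelta("1721180265.155979386\n", time.Now())
	if err != nil {
		panic(err)
	}
	if math.Abs(delta.Seconds()) < 2 {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v, consider syncing\n", delta)
	}
}
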
	I0717 01:37:45.181551   69161 start.go:83] releasing machines lock for "no-preload-818382", held for 19.07872127s
	I0717 01:37:45.181570   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.181832   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:45.184836   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.185246   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.185273   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.185420   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.185969   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.186161   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:37:45.186303   69161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:37:45.186354   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:45.186440   69161 ssh_runner.go:195] Run: cat /version.json
	I0717 01:37:45.186464   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:37:45.189106   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189351   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189501   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.189548   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189674   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:45.189876   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.189883   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:45.189910   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:45.189957   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:37:45.190062   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:45.190122   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:37:45.190251   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:37:45.190283   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:45.190505   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:37:45.273517   69161 ssh_runner.go:195] Run: systemctl --version
	I0717 01:37:45.297810   69161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:37:45.444285   69161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:37:45.450949   69161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:37:45.451015   69161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:37:45.469442   69161 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:37:45.469470   69161 start.go:495] detecting cgroup driver to use...
	I0717 01:37:45.469534   69161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:37:45.488907   69161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:37:45.503268   69161 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:37:45.503336   69161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:37:45.516933   69161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:37:45.530525   69161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:37:45.642175   69161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:37:45.802107   69161 docker.go:233] disabling docker service ...
	I0717 01:37:45.802170   69161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:37:45.815967   69161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:37:45.827961   69161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:37:45.948333   69161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:37:46.066388   69161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:37:46.081332   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:37:46.102124   69161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 01:37:46.102209   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.113289   69161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:37:46.113361   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.123902   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.133825   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.143399   69161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:37:46.153336   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.163110   69161 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.179869   69161 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:37:46.190114   69161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:37:46.199740   69161 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:37:46.199791   69161 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:37:46.212405   69161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:37:46.223444   69161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:37:46.337353   69161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:37:46.486553   69161 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:37:46.486616   69161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:37:46.491747   69161 start.go:563] Will wait 60s for crictl version
	I0717 01:37:46.491820   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:46.495749   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:37:46.537334   69161 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:37:46.537418   69161 ssh_runner.go:195] Run: crio --version
	I0717 01:37:46.566918   69161 ssh_runner.go:195] Run: crio --version
	I0717 01:37:46.598762   69161 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 01:37:46.600041   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetIP
	I0717 01:37:46.602939   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:46.603358   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:37:46.603387   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:37:46.603645   69161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:37:46.607975   69161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
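
The bash pipeline above refreshes the host.minikube.internal entry in /etc/hosts: it filters out any previous line for that name and appends the gateway IP 192.168.39.1. An equivalent Go sketch, assuming local file access (upsertHost is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line ending in "\t<name>" and appends "<ip>\t<name>",
// mirroring the grep -v / echo pipeline from the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}
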
	I0717 01:37:46.621718   69161 kubeadm.go:883] updating cluster {Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:37:46.621869   69161 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:37:46.621921   69161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:37:46.657321   69161 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 01:37:46.657346   69161 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:37:46.657389   69161 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:46.657417   69161 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:46.657446   69161 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 01:37:46.657480   69161 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.657596   69161 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:46.657645   69161 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:46.657653   69161 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.657733   69161 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.659108   69161 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 01:37:46.659120   69161 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:46.659172   69161 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.659109   69161 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:46.659171   69161 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.659209   69161 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:46.659210   69161 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.659110   69161 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:46.818816   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.824725   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.825088   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.825902   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:46.830336   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:46.842814   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 01:37:46.876989   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:46.906964   69161 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 01:37:46.907012   69161 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:46.907060   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:46.953522   69161 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 01:37:46.953572   69161 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:46.953624   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:46.985236   69161 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:46.990623   69161 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 01:37:46.990667   69161 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:46.990715   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.000280   69161 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 01:37:47.000313   69161 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:47.000354   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.009927   69161 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 01:37:47.009976   69161 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:47.010045   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.124625   69161 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 01:37:47.124677   69161 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:47.124706   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:37:47.124718   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.124805   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:37:47.124853   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 01:37:47.124877   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:37:47.124906   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:37:47.124804   69161 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 01:37:47.124949   69161 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:47.124983   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:37:47.231159   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:47.231201   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 01:37:47.231217   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:37:47.231243   69161 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:37:47.231263   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:47.231302   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:37:47.231349   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:47.231414   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:47.231570   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:47.231431   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:47.231464   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 01:37:47.231715   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:37:47.279220   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 01:37:47.279239   69161 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:37:47.279286   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:37:47.293132   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 01:37:47.293233   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 01:37:47.293243   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:37:47.293309   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 01:37:47.293313   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 01:37:47.293338   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 01:37:47.293480   69161 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 01:37:47.293582   69161 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:37:51.052908   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.773599434s)
	I0717 01:37:51.052941   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 01:37:51.052963   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:51.052960   69161 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (3.759674708s)
	I0717 01:37:51.052994   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 01:37:51.053016   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:37:51.053020   69161 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.75941775s)
	I0717 01:37:51.053050   69161 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 01:37:52.809764   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.756726059s)
	I0717 01:37:52.809790   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 01:37:52.809818   69161 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:37:52.809884   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:37:54.565189   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.755280201s)
	I0717 01:37:54.565217   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 01:37:54.565251   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:54.565341   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:37:56.720406   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.155036511s)
	I0717 01:37:56.720439   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 01:37:56.720473   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:56.720538   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:37:58.168141   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.447572914s)
	I0717 01:37:58.168181   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 01:37:58.168216   69161 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:37:58.168278   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:38:00.033559   69161 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.865254148s)
	I0717 01:38:00.033590   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 01:38:00.033619   69161 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:38:00.033680   69161 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:38:00.885074   69161 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19265-12897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 01:38:00.885123   69161 cache_images.go:123] Successfully loaded all cached images
	I0717 01:38:00.885131   69161 cache_images.go:92] duration metric: took 14.22776998s to LoadCachedImages
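
With no preload tarball available for v1.31.0-beta.0, each required image is handled individually: podman image inspect checks whether it is already present in the runtime, crictl rmi clears a stale tag, and podman load imports the cached archive from /var/lib/minikube/images. A condensed Go sketch of that per-image loop (image list abbreviated; the hash comparison and SSH transport from the log are omitted):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

func main() {
	images := map[string]string{
		"registry.k8s.io/etcd:3.5.14-0":           "etcd_3.5.14-0",
		"registry.k8s.io/coredns/coredns:v1.11.1": "coredns_v1.11.1",
		// ...the remaining control-plane images follow the same pattern.
	}
	for ref, cacheName := range images {
		// Skip images that are already present in the container runtime.
		if exec.Command("sudo", "podman", "image", "inspect", ref).Run() == nil {
			continue
		}
		// Remove any stale tag, then load the cached archive.
		_ = exec.Command("sudo", "crictl", "rmi", ref).Run()
		archive := filepath.Join("/var/lib/minikube/images", cacheName)
		if err := exec.Command("sudo", "podman", "load", "-i", archive).Run(); err != nil {
			fmt.Println("load failed for", ref, ":", err)
		}
	}
}
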
	I0717 01:38:00.885149   69161 kubeadm.go:934] updating node { 192.168.39.38 8443 v1.31.0-beta.0 crio true true} ...
	I0717 01:38:00.885276   69161 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-818382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
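
The kubelet systemd drop-in shown above is rendered from the node settings, with ExecStart rebuilt to carry the hostname override, node IP, and kubeconfig paths. A small text/template sketch that reproduces the same output (the template literal here is inferred from the log output, not taken from minikube's source):

package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Render the drop-in with the values seen in the log for this node.
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.0-beta.0", "no-preload-818382", "192.168.39.38"})
}
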
	I0717 01:38:00.885360   69161 ssh_runner.go:195] Run: crio config
	I0717 01:38:00.935613   69161 cni.go:84] Creating CNI manager for ""
	I0717 01:38:00.935637   69161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:38:00.935649   69161 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:38:00.935674   69161 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-818382 NodeName:no-preload-818382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:38:00.935799   69161 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-818382"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:38:00.935866   69161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 01:38:00.946897   69161 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:38:00.946982   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:38:00.956493   69161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0717 01:38:00.974619   69161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 01:38:00.992580   69161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0717 01:38:01.009552   69161 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0717 01:38:01.013704   69161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:38:01.026053   69161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:38:01.150532   69161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:38:01.167166   69161 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382 for IP: 192.168.39.38
	I0717 01:38:01.167196   69161 certs.go:194] generating shared ca certs ...
	I0717 01:38:01.167219   69161 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:01.167398   69161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:38:01.167485   69161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:38:01.167504   69161 certs.go:256] generating profile certs ...
	I0717 01:38:01.167622   69161 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/client.key
	I0717 01:38:01.167740   69161 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/apiserver.key.0a44641a
	I0717 01:38:01.167811   69161 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/proxy-client.key
	I0717 01:38:01.167996   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:38:01.168037   69161 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:38:01.168049   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:38:01.168094   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:38:01.168137   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:38:01.168176   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:38:01.168241   69161 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:38:01.169161   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:38:01.202385   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:38:01.236910   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:38:01.270000   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:38:01.306655   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:38:01.355634   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:38:01.386958   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:38:01.411202   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/no-preload-818382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:38:01.435949   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:38:01.460843   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:38:01.486827   69161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:38:01.511874   69161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:38:01.529784   69161 ssh_runner.go:195] Run: openssl version
	I0717 01:38:01.535968   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:38:01.547564   69161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:38:01.552546   69161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:38:01.552611   69161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:38:01.558592   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:38:01.569461   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:38:01.580422   69161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:38:01.585228   69161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:38:01.585276   69161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:38:01.591149   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:38:01.602249   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:38:01.614146   69161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:01.618807   69161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:01.618868   69161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:01.624861   69161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:38:01.635446   69161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:38:01.640287   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:38:01.646102   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:38:01.651967   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:38:01.658169   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:38:01.664359   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:38:01.670597   69161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
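
Each openssl x509 -checkend 86400 invocation above asserts that the certificate is still valid for at least another 24 hours. A rough Go equivalent of that check using crypto/x509 (certificate path reused from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}
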
	I0717 01:38:01.677288   69161 kubeadm.go:392] StartCluster: {Name:no-preload-818382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-818382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:38:01.677378   69161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:38:01.677434   69161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:38:01.718896   69161 cri.go:89] found id: ""
	I0717 01:38:01.718964   69161 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:38:01.730404   69161 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:38:01.730426   69161 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:38:01.730467   69161 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:38:01.742131   69161 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:38:01.743114   69161 kubeconfig.go:125] found "no-preload-818382" server: "https://192.168.39.38:8443"
	I0717 01:38:01.745151   69161 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:38:01.755348   69161 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0717 01:38:01.755379   69161 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:38:01.755393   69161 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:38:01.755441   69161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:38:01.794585   69161 cri.go:89] found id: ""
	I0717 01:38:01.794657   69161 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:38:01.811878   69161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:38:01.822275   69161 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:38:01.822297   69161 kubeadm.go:157] found existing configuration files:
	
	I0717 01:38:01.822349   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:38:01.832295   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:38:01.832361   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:38:01.841853   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:38:01.850743   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:38:01.850792   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:38:01.860061   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:38:01.869640   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:38:01.869695   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:38:01.879146   69161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:38:01.888664   69161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:38:01.888730   69161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:38:01.898051   69161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:38:01.907209   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:02.013763   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.064624   69161 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.050830101s)
	I0717 01:38:03.064658   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.281880   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.360185   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:03.475762   69161 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:38:03.475859   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:38:03.976869   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:38:04.476826   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:38:04.513612   69161 api_server.go:72] duration metric: took 1.03785049s to wait for apiserver process to appear ...
	I0717 01:38:04.513637   69161 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:38:04.513658   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:04.514182   69161 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0717 01:38:05.013987   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:07.606646   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:38:07.606681   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:38:07.606698   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:07.644623   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:38:07.644659   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:38:08.014209   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:08.018649   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:38:08.018675   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:38:08.513802   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:08.523658   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:38:08.523683   69161 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:38:09.013997   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:38:09.018582   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0717 01:38:09.025524   69161 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 01:38:09.025556   69161 api_server.go:131] duration metric: took 4.511910476s to wait for apiserver health ...
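
The healthz probing above tolerates an initial connection refusal, 403 responses for the anonymous user, and 500s while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes hooks finish, and stops once /healthz returns 200 "ok". A hedged Go sketch of such a poll loop (endpoint taken from the log; TLS verification is skipped because the probe runs before client credentials are configured):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.38:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// Not healthy yet: 403 before RBAC bootstrap, 500 while hooks run.
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
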
	I0717 01:38:09.025567   69161 cni.go:84] Creating CNI manager for ""
	I0717 01:38:09.025576   69161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:38:09.026854   69161 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:38:09.028050   69161 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:38:09.054928   69161 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:38:09.099807   69161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:38:09.110763   69161 system_pods.go:59] 8 kube-system pods found
	I0717 01:38:09.110804   69161 system_pods.go:61] "coredns-5cfdc65f69-rzhfk" [eb91980f-dca7-4dd0-902e-7d1ffac4e1b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:38:09.110817   69161 system_pods.go:61] "etcd-no-preload-818382" [99688a8a-50fc-416b-9c00-23a516eab775] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:38:09.110827   69161 system_pods.go:61] "kube-apiserver-no-preload-818382" [3e08eb95-84f7-4541-a2c3-9a5b9e3365f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:38:09.110835   69161 system_pods.go:61] "kube-controller-manager-no-preload-818382" [d356be23-8cd9-4f72-94e6-354a39f587eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:38:09.110843   69161 system_pods.go:61] "kube-proxy-7xjgl" [79ab1bff-5791-464d-98a0-041c53c47234] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:38:09.110852   69161 system_pods.go:61] "kube-scheduler-no-preload-818382" [e148b48b-ee09-49b4-9600-83c039254f29] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:38:09.110862   69161 system_pods.go:61] "metrics-server-78fcd8795b-vgkwg" [6386b732-76a6-4744-9215-e4764e08e4e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:38:09.110872   69161 system_pods.go:61] "storage-provisioner" [c5a0695e-6c38-463e-8f96-60c0e60c7132] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 01:38:09.110881   69161 system_pods.go:74] duration metric: took 11.048265ms to wait for pod list to return data ...
	I0717 01:38:09.110895   69161 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:38:09.115164   69161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:38:09.115185   69161 node_conditions.go:123] node cpu capacity is 2
	I0717 01:38:09.115195   69161 node_conditions.go:105] duration metric: took 4.295793ms to run NodePressure ...
	I0717 01:38:09.115222   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:38:09.380448   69161 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:38:09.385062   69161 kubeadm.go:739] kubelet initialised
	I0717 01:38:09.385081   69161 kubeadm.go:740] duration metric: took 4.609373ms waiting for restarted kubelet to initialise ...
	I0717 01:38:09.385089   69161 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:38:09.390128   69161 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.395089   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.395114   69161 pod_ready.go:81] duration metric: took 4.964286ms for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.395122   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.395130   69161 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.400466   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "etcd-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.400485   69161 pod_ready.go:81] duration metric: took 5.34752ms for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.400494   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "etcd-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.400502   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.406059   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-apiserver-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.406079   69161 pod_ready.go:81] duration metric: took 5.569824ms for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.406087   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-apiserver-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.406094   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.508478   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.508503   69161 pod_ready.go:81] duration metric: took 102.401908ms for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.508513   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.508521   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:09.903484   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-proxy-7xjgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.903507   69161 pod_ready.go:81] duration metric: took 394.977533ms for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:09.903516   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-proxy-7xjgl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:09.903522   69161 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:10.303374   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "kube-scheduler-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.303400   69161 pod_ready.go:81] duration metric: took 399.87153ms for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:10.303410   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "kube-scheduler-no-preload-818382" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.303417   69161 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:10.703844   69161 pod_ready.go:97] node "no-preload-818382" hosting pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.703872   69161 pod_ready.go:81] duration metric: took 400.446731ms for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	E0717 01:38:10.703882   69161 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-818382" hosting pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:10.703890   69161 pod_ready.go:38] duration metric: took 1.31879349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:38:10.703906   69161 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:38:10.716314   69161 ops.go:34] apiserver oom_adj: -16
	I0717 01:38:10.716330   69161 kubeadm.go:597] duration metric: took 8.985898425s to restartPrimaryControlPlane
	I0717 01:38:10.716338   69161 kubeadm.go:394] duration metric: took 9.0390568s to StartCluster
	I0717 01:38:10.716357   69161 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:10.716443   69161 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:38:10.718239   69161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:10.718467   69161 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:38:10.718525   69161 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:38:10.718599   69161 addons.go:69] Setting storage-provisioner=true in profile "no-preload-818382"
	I0717 01:38:10.718615   69161 addons.go:69] Setting default-storageclass=true in profile "no-preload-818382"
	I0717 01:38:10.718632   69161 addons.go:234] Setting addon storage-provisioner=true in "no-preload-818382"
	W0717 01:38:10.718641   69161 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:38:10.718657   69161 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-818382"
	I0717 01:38:10.718648   69161 addons.go:69] Setting metrics-server=true in profile "no-preload-818382"
	I0717 01:38:10.718669   69161 host.go:66] Checking if "no-preload-818382" exists ...
	I0717 01:38:10.718684   69161 addons.go:234] Setting addon metrics-server=true in "no-preload-818382"
	W0717 01:38:10.718694   69161 addons.go:243] addon metrics-server should already be in state true
	I0717 01:38:10.718710   69161 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:38:10.718720   69161 host.go:66] Checking if "no-preload-818382" exists ...
	I0717 01:38:10.718995   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.719013   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.719033   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.719036   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.719037   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.719062   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.720225   69161 out.go:177] * Verifying Kubernetes components...
	I0717 01:38:10.721645   69161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:38:10.735669   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I0717 01:38:10.735668   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42639
	I0717 01:38:10.736213   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.736224   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.736697   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.736712   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.736749   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.736761   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.737065   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.737104   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.737517   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37461
	I0717 01:38:10.737604   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.737623   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.737632   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.737643   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.737988   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.738548   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.738575   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.738916   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.739154   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.742601   69161 addons.go:234] Setting addon default-storageclass=true in "no-preload-818382"
	W0717 01:38:10.742621   69161 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:38:10.742649   69161 host.go:66] Checking if "no-preload-818382" exists ...
	I0717 01:38:10.742978   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.743000   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.753050   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40075
	I0717 01:38:10.761069   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.761760   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.761778   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.762198   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.762374   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.764056   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:38:10.766144   69161 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:38:10.767506   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:38:10.767527   69161 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:38:10.767546   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:38:10.770625   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.771141   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:38:10.771169   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.771354   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:38:10.771538   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:38:10.771797   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:38:10.771964   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:38:10.777232   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I0717 01:38:10.777667   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.778207   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.778234   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.778629   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.778820   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.780129   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43699
	I0717 01:38:10.780526   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:38:10.780732   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.781258   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.781283   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.781642   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.782089   69161 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:38:10.782134   69161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:10.782214   69161 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:38:10.783466   69161 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:38:10.783484   69161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:38:10.783501   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:38:10.786557   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.786985   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:38:10.787006   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.787233   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:38:10.787393   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:38:10.787514   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:38:10.787610   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:38:10.798054   69161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I0717 01:38:10.798498   69161 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:10.798922   69161 main.go:141] libmachine: Using API Version  1
	I0717 01:38:10.798942   69161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:10.799281   69161 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:10.799452   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetState
	I0717 01:38:10.801194   69161 main.go:141] libmachine: (no-preload-818382) Calling .DriverName
	I0717 01:38:10.801413   69161 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:38:10.801428   69161 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:38:10.801444   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHHostname
	I0717 01:38:10.804551   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.804963   69161 main.go:141] libmachine: (no-preload-818382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:de:04", ip: ""} in network mk-no-preload-818382: {Iface:virbr1 ExpiryTime:2024-07-17 02:37:36 +0000 UTC Type:0 Mac:52:54:00:e4:de:04 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-818382 Clientid:01:52:54:00:e4:de:04}
	I0717 01:38:10.804988   69161 main.go:141] libmachine: (no-preload-818382) DBG | domain no-preload-818382 has defined IP address 192.168.39.38 and MAC address 52:54:00:e4:de:04 in network mk-no-preload-818382
	I0717 01:38:10.805103   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHPort
	I0717 01:38:10.805413   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHKeyPath
	I0717 01:38:10.805564   69161 main.go:141] libmachine: (no-preload-818382) Calling .GetSSHUsername
	I0717 01:38:10.805712   69161 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/no-preload-818382/id_rsa Username:docker}
	I0717 01:38:10.941843   69161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:38:10.962485   69161 node_ready.go:35] waiting up to 6m0s for node "no-preload-818382" to be "Ready" ...
	I0717 01:38:11.029564   69161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:38:11.047993   69161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:38:11.180628   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:38:11.180648   69161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:38:11.254864   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:38:11.254891   69161 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:38:11.322266   69161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:38:11.322290   69161 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:38:11.386819   69161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:38:12.107148   69161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.059119392s)
	I0717 01:38:12.107209   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107223   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107351   69161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077746478s)
	I0717 01:38:12.107396   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107407   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107523   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.107542   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.107553   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107562   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107751   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.107766   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.107780   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.107789   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.107793   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.107798   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.107824   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.107831   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.108023   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.108056   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.108064   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.120981   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.121012   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.121920   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.121942   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.121958   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.192883   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.192908   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.193311   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.193357   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.193369   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.193378   69161 main.go:141] libmachine: Making call to close driver server
	I0717 01:38:12.193389   69161 main.go:141] libmachine: (no-preload-818382) Calling .Close
	I0717 01:38:12.193656   69161 main.go:141] libmachine: (no-preload-818382) DBG | Closing plugin on server side
	I0717 01:38:12.193695   69161 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:38:12.193704   69161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:38:12.193720   69161 addons.go:475] Verifying addon metrics-server=true in "no-preload-818382"
	I0717 01:38:12.196085   69161 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:38:12.197195   69161 addons.go:510] duration metric: took 1.478669603s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 01:38:12.968419   69161 node_ready.go:53] node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:15.466641   69161 node_ready.go:53] node "no-preload-818382" has status "Ready":"False"
	I0717 01:38:17.966396   69161 node_ready.go:49] node "no-preload-818382" has status "Ready":"True"
	I0717 01:38:17.966419   69161 node_ready.go:38] duration metric: took 7.003900387s for node "no-preload-818382" to be "Ready" ...
	I0717 01:38:17.966428   69161 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:38:17.972276   69161 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:17.979661   69161 pod_ready.go:92] pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:17.979686   69161 pod_ready.go:81] duration metric: took 7.383414ms for pod "coredns-5cfdc65f69-rzhfk" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:17.979700   69161 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:19.986664   69161 pod_ready.go:102] pod "etcd-no-preload-818382" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:22.486306   69161 pod_ready.go:102] pod "etcd-no-preload-818382" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:23.988340   69161 pod_ready.go:92] pod "etcd-no-preload-818382" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:23.988366   69161 pod_ready.go:81] duration metric: took 6.008658778s for pod "etcd-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.988379   69161 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.994341   69161 pod_ready.go:92] pod "kube-apiserver-no-preload-818382" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:23.994369   69161 pod_ready.go:81] duration metric: took 5.983444ms for pod "kube-apiserver-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.994378   69161 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.999839   69161 pod_ready.go:92] pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:23.999858   69161 pod_ready.go:81] duration metric: took 5.472052ms for pod "kube-controller-manager-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:23.999870   69161 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:24.004359   69161 pod_ready.go:92] pod "kube-proxy-7xjgl" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:24.004376   69161 pod_ready.go:81] duration metric: took 4.499078ms for pod "kube-proxy-7xjgl" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:24.004388   69161 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:24.008711   69161 pod_ready.go:92] pod "kube-scheduler-no-preload-818382" in "kube-system" namespace has status "Ready":"True"
	I0717 01:38:24.008728   69161 pod_ready.go:81] duration metric: took 4.333011ms for pod "kube-scheduler-no-preload-818382" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:24.008738   69161 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	I0717 01:38:26.015816   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:28.515069   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:30.515823   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:33.015758   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:35.519125   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:38.015328   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:40.015434   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:42.016074   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:44.515165   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:46.515207   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:48.515526   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:51.015352   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:53.524771   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:55.525830   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:38:58.015294   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:00.016582   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:02.526596   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:05.017331   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:07.522994   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:10.015668   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:12.016581   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:14.514264   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:16.514483   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:18.514912   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:20.516805   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:23.017254   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:25.520744   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:27.525313   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:30.015300   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:32.515768   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:34.516472   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:37.015323   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:39.519189   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:41.519551   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:43.519612   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:46.015845   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:48.514995   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:51.015723   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:53.518041   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:56.016848   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:39:58.515231   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:01.014815   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:03.016104   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:05.515128   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:08.015053   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:10.515596   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:12.516108   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:15.016422   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:17.516656   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:20.023212   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:22.516829   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:25.015503   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:27.515818   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:29.516308   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:31.516354   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:34.014939   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:36.015491   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:38.515680   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:40.516729   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:43.015702   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:45.016597   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:47.516644   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:50.016083   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:52.016256   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:54.016658   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:56.019466   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:40:58.517513   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:01.015342   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:03.016255   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:05.017209   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:07.514660   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:09.515175   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:11.515986   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:14.016122   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:16.516248   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:19.016993   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:21.515181   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:23.515448   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:26.016226   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:28.516309   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:31.016068   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:33.516141   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:36.015057   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:38.015141   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:40.015943   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:42.515237   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:44.515403   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:46.516180   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:49.014892   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:51.019533   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:53.514629   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:55.515878   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:41:57.516813   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:00.016045   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:02.515848   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:05.017085   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:07.515218   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:10.016436   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:12.514412   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:14.515538   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:17.015473   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:19.516189   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:22.015149   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:24.015247   69161 pod_ready.go:102] pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:24.015279   69161 pod_ready.go:81] duration metric: took 4m0.006532152s for pod "metrics-server-78fcd8795b-vgkwg" in "kube-system" namespace to be "Ready" ...
	E0717 01:42:24.015291   69161 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 01:42:24.015300   69161 pod_ready.go:38] duration metric: took 4m6.048863476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:42:24.015319   69161 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:42:24.015354   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:42:24.015412   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:42:24.070533   69161 cri.go:89] found id: "8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:24.070555   69161 cri.go:89] found id: ""
	I0717 01:42:24.070564   69161 logs.go:276] 1 containers: [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2]
	I0717 01:42:24.070624   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.075767   69161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:42:24.075844   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:42:24.118412   69161 cri.go:89] found id: "0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:24.118434   69161 cri.go:89] found id: ""
	I0717 01:42:24.118442   69161 logs.go:276] 1 containers: [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf]
	I0717 01:42:24.118491   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.123255   69161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:42:24.123323   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:42:24.159858   69161 cri.go:89] found id: "e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:24.159880   69161 cri.go:89] found id: ""
	I0717 01:42:24.159887   69161 logs.go:276] 1 containers: [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902]
	I0717 01:42:24.159938   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.164261   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:42:24.164333   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:42:24.201402   69161 cri.go:89] found id: "b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:24.201429   69161 cri.go:89] found id: ""
	I0717 01:42:24.201438   69161 logs.go:276] 1 containers: [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc]
	I0717 01:42:24.201490   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.206056   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:42:24.206112   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:42:24.241083   69161 cri.go:89] found id: "98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:24.241109   69161 cri.go:89] found id: ""
	I0717 01:42:24.241119   69161 logs.go:276] 1 containers: [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571]
	I0717 01:42:24.241177   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.245739   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:42:24.245794   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:42:24.284369   69161 cri.go:89] found id: "7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:24.284400   69161 cri.go:89] found id: ""
	I0717 01:42:24.284410   69161 logs.go:276] 1 containers: [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e]
	I0717 01:42:24.284473   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.290128   69161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:42:24.290184   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:42:24.328815   69161 cri.go:89] found id: ""
	I0717 01:42:24.328841   69161 logs.go:276] 0 containers: []
	W0717 01:42:24.328848   69161 logs.go:278] No container was found matching "kindnet"
	I0717 01:42:24.328854   69161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:42:24.328919   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:42:24.365591   69161 cri.go:89] found id: "da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:24.365614   69161 cri.go:89] found id: "b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:24.365621   69161 cri.go:89] found id: ""
	I0717 01:42:24.365630   69161 logs.go:276] 2 containers: [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a]
	I0717 01:42:24.365690   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.370614   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:24.375611   69161 logs.go:123] Gathering logs for dmesg ...
	I0717 01:42:24.375641   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:42:24.392837   69161 logs.go:123] Gathering logs for etcd [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf] ...
	I0717 01:42:24.392872   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:24.443010   69161 logs.go:123] Gathering logs for container status ...
	I0717 01:42:24.443036   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:42:24.482837   69161 logs.go:123] Gathering logs for coredns [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902] ...
	I0717 01:42:24.482870   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:24.536236   69161 logs.go:123] Gathering logs for kube-scheduler [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc] ...
	I0717 01:42:24.536262   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:24.576709   69161 logs.go:123] Gathering logs for kube-proxy [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571] ...
	I0717 01:42:24.576740   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:24.625042   69161 logs.go:123] Gathering logs for kube-controller-manager [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e] ...
	I0717 01:42:24.625069   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:24.679911   69161 logs.go:123] Gathering logs for storage-provisioner [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461] ...
	I0717 01:42:24.679945   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:24.721782   69161 logs.go:123] Gathering logs for kubelet ...
	I0717 01:42:24.721809   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:42:24.775881   69161 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:42:24.775916   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:42:24.917773   69161 logs.go:123] Gathering logs for kube-apiserver [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2] ...
	I0717 01:42:24.917806   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:24.962644   69161 logs.go:123] Gathering logs for storage-provisioner [b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a] ...
	I0717 01:42:24.962673   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:25.002204   69161 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:42:25.002242   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:42:28.032243   69161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:42:28.049580   69161 api_server.go:72] duration metric: took 4m17.331083879s to wait for apiserver process to appear ...
	I0717 01:42:28.049612   69161 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:42:28.049656   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:42:28.049717   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:42:28.088496   69161 cri.go:89] found id: "8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:28.088519   69161 cri.go:89] found id: ""
	I0717 01:42:28.088527   69161 logs.go:276] 1 containers: [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2]
	I0717 01:42:28.088598   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.092659   69161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:42:28.092712   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:42:28.127205   69161 cri.go:89] found id: "0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:28.127224   69161 cri.go:89] found id: ""
	I0717 01:42:28.127231   69161 logs.go:276] 1 containers: [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf]
	I0717 01:42:28.127276   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.131356   69161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:42:28.131425   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:42:28.166535   69161 cri.go:89] found id: "e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:28.166556   69161 cri.go:89] found id: ""
	I0717 01:42:28.166564   69161 logs.go:276] 1 containers: [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902]
	I0717 01:42:28.166608   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.170576   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:42:28.170633   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:42:28.204842   69161 cri.go:89] found id: "b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:28.204863   69161 cri.go:89] found id: ""
	I0717 01:42:28.204871   69161 logs.go:276] 1 containers: [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc]
	I0717 01:42:28.204924   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.208869   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:42:28.208922   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:42:28.241397   69161 cri.go:89] found id: "98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:28.241414   69161 cri.go:89] found id: ""
	I0717 01:42:28.241421   69161 logs.go:276] 1 containers: [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571]
	I0717 01:42:28.241461   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.245569   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:42:28.245630   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:42:28.282072   69161 cri.go:89] found id: "7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:28.282097   69161 cri.go:89] found id: ""
	I0717 01:42:28.282106   69161 logs.go:276] 1 containers: [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e]
	I0717 01:42:28.282159   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.286678   69161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:42:28.286738   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:42:28.320229   69161 cri.go:89] found id: ""
	I0717 01:42:28.320255   69161 logs.go:276] 0 containers: []
	W0717 01:42:28.320265   69161 logs.go:278] No container was found matching "kindnet"
	I0717 01:42:28.320271   69161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:42:28.320321   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:42:28.358955   69161 cri.go:89] found id: "da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:28.358979   69161 cri.go:89] found id: "b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:28.358985   69161 cri.go:89] found id: ""
	I0717 01:42:28.358992   69161 logs.go:276] 2 containers: [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a]
	I0717 01:42:28.359051   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.363407   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:28.367862   69161 logs.go:123] Gathering logs for kube-scheduler [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc] ...
	I0717 01:42:28.367886   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:28.405920   69161 logs.go:123] Gathering logs for kube-proxy [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571] ...
	I0717 01:42:28.405948   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:28.442790   69161 logs.go:123] Gathering logs for kube-controller-manager [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e] ...
	I0717 01:42:28.442814   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:28.507947   69161 logs.go:123] Gathering logs for storage-provisioner [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461] ...
	I0717 01:42:28.507977   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:28.543353   69161 logs.go:123] Gathering logs for storage-provisioner [b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a] ...
	I0717 01:42:28.543375   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:28.591451   69161 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:42:28.591484   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:42:29.046193   69161 logs.go:123] Gathering logs for container status ...
	I0717 01:42:29.046234   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:42:29.093710   69161 logs.go:123] Gathering logs for etcd [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf] ...
	I0717 01:42:29.093743   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:29.132784   69161 logs.go:123] Gathering logs for dmesg ...
	I0717 01:42:29.132811   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:42:29.148146   69161 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:42:29.148176   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:42:29.250655   69161 logs.go:123] Gathering logs for kube-apiserver [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2] ...
	I0717 01:42:29.250682   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:29.295193   69161 logs.go:123] Gathering logs for coredns [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902] ...
	I0717 01:42:29.295222   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:29.330372   69161 logs.go:123] Gathering logs for kubelet ...
	I0717 01:42:29.330404   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:42:31.882296   69161 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0717 01:42:31.887420   69161 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0717 01:42:31.889130   69161 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 01:42:31.889151   69161 api_server.go:131] duration metric: took 3.839533176s to wait for apiserver health ...
	I0717 01:42:31.889159   69161 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:42:31.889180   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:42:31.889231   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:42:31.932339   69161 cri.go:89] found id: "8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:31.932359   69161 cri.go:89] found id: ""
	I0717 01:42:31.932369   69161 logs.go:276] 1 containers: [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2]
	I0717 01:42:31.932428   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:31.936635   69161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:42:31.936694   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:42:31.973771   69161 cri.go:89] found id: "0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:31.973797   69161 cri.go:89] found id: ""
	I0717 01:42:31.973805   69161 logs.go:276] 1 containers: [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf]
	I0717 01:42:31.973864   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:31.978328   69161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:42:31.978400   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:42:32.017561   69161 cri.go:89] found id: "e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:32.017589   69161 cri.go:89] found id: ""
	I0717 01:42:32.017598   69161 logs.go:276] 1 containers: [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902]
	I0717 01:42:32.017652   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.021983   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:42:32.022043   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:42:32.060032   69161 cri.go:89] found id: "b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:32.060058   69161 cri.go:89] found id: ""
	I0717 01:42:32.060067   69161 logs.go:276] 1 containers: [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc]
	I0717 01:42:32.060124   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.064390   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:42:32.064447   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:42:32.104292   69161 cri.go:89] found id: "98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:32.104314   69161 cri.go:89] found id: ""
	I0717 01:42:32.104322   69161 logs.go:276] 1 containers: [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571]
	I0717 01:42:32.104378   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.108874   69161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:42:32.108939   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:42:32.151590   69161 cri.go:89] found id: "7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:32.151611   69161 cri.go:89] found id: ""
	I0717 01:42:32.151619   69161 logs.go:276] 1 containers: [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e]
	I0717 01:42:32.151683   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.155683   69161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:42:32.155749   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:42:32.191197   69161 cri.go:89] found id: ""
	I0717 01:42:32.191224   69161 logs.go:276] 0 containers: []
	W0717 01:42:32.191235   69161 logs.go:278] No container was found matching "kindnet"
	I0717 01:42:32.191250   69161 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:42:32.191315   69161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:42:32.228709   69161 cri.go:89] found id: "da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:32.228729   69161 cri.go:89] found id: "b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:32.228734   69161 cri.go:89] found id: ""
	I0717 01:42:32.228741   69161 logs.go:276] 2 containers: [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a]
	I0717 01:42:32.228825   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.234032   69161 ssh_runner.go:195] Run: which crictl
	I0717 01:42:32.239566   69161 logs.go:123] Gathering logs for dmesg ...
	I0717 01:42:32.239588   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:42:32.254327   69161 logs.go:123] Gathering logs for kube-apiserver [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2] ...
	I0717 01:42:32.254353   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2"
	I0717 01:42:32.313682   69161 logs.go:123] Gathering logs for etcd [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf] ...
	I0717 01:42:32.313709   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf"
	I0717 01:42:32.354250   69161 logs.go:123] Gathering logs for kube-controller-manager [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e] ...
	I0717 01:42:32.354278   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e"
	I0717 01:42:32.404452   69161 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:42:32.404490   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:42:32.824059   69161 logs.go:123] Gathering logs for kubelet ...
	I0717 01:42:32.824092   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:42:32.877614   69161 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:42:32.877645   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:42:32.987728   69161 logs.go:123] Gathering logs for coredns [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902] ...
	I0717 01:42:32.987756   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902"
	I0717 01:42:33.028146   69161 logs.go:123] Gathering logs for kube-scheduler [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc] ...
	I0717 01:42:33.028183   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc"
	I0717 01:42:33.067880   69161 logs.go:123] Gathering logs for kube-proxy [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571] ...
	I0717 01:42:33.067907   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571"
	I0717 01:42:33.106837   69161 logs.go:123] Gathering logs for storage-provisioner [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461] ...
	I0717 01:42:33.106870   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461"
	I0717 01:42:33.141500   69161 logs.go:123] Gathering logs for storage-provisioner [b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a] ...
	I0717 01:42:33.141530   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a"
	I0717 01:42:33.183960   69161 logs.go:123] Gathering logs for container status ...
	I0717 01:42:33.183991   69161 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:42:35.738491   69161 system_pods.go:59] 8 kube-system pods found
	I0717 01:42:35.738522   69161 system_pods.go:61] "coredns-5cfdc65f69-rzhfk" [eb91980f-dca7-4dd0-902e-7d1ffac4e1b7] Running
	I0717 01:42:35.738526   69161 system_pods.go:61] "etcd-no-preload-818382" [99688a8a-50fc-416b-9c00-23a516eab775] Running
	I0717 01:42:35.738531   69161 system_pods.go:61] "kube-apiserver-no-preload-818382" [3e08eb95-84f7-4541-a2c3-9a5b9e3365f9] Running
	I0717 01:42:35.738536   69161 system_pods.go:61] "kube-controller-manager-no-preload-818382" [d356be23-8cd9-4f72-94e6-354a39f587eb] Running
	I0717 01:42:35.738551   69161 system_pods.go:61] "kube-proxy-7xjgl" [79ab1bff-5791-464d-98a0-041c53c47234] Running
	I0717 01:42:35.738558   69161 system_pods.go:61] "kube-scheduler-no-preload-818382" [e148b48b-ee09-49b4-9600-83c039254f29] Running
	I0717 01:42:35.738567   69161 system_pods.go:61] "metrics-server-78fcd8795b-vgkwg" [6386b732-76a6-4744-9215-e4764e08e4e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:42:35.738573   69161 system_pods.go:61] "storage-provisioner" [c5a0695e-6c38-463e-8f96-60c0e60c7132] Running
	I0717 01:42:35.738583   69161 system_pods.go:74] duration metric: took 3.849417383s to wait for pod list to return data ...
	I0717 01:42:35.738596   69161 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:42:35.741135   69161 default_sa.go:45] found service account: "default"
	I0717 01:42:35.741154   69161 default_sa.go:55] duration metric: took 2.55225ms for default service account to be created ...
	I0717 01:42:35.741160   69161 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:42:35.745925   69161 system_pods.go:86] 8 kube-system pods found
	I0717 01:42:35.745944   69161 system_pods.go:89] "coredns-5cfdc65f69-rzhfk" [eb91980f-dca7-4dd0-902e-7d1ffac4e1b7] Running
	I0717 01:42:35.745949   69161 system_pods.go:89] "etcd-no-preload-818382" [99688a8a-50fc-416b-9c00-23a516eab775] Running
	I0717 01:42:35.745953   69161 system_pods.go:89] "kube-apiserver-no-preload-818382" [3e08eb95-84f7-4541-a2c3-9a5b9e3365f9] Running
	I0717 01:42:35.745957   69161 system_pods.go:89] "kube-controller-manager-no-preload-818382" [d356be23-8cd9-4f72-94e6-354a39f587eb] Running
	I0717 01:42:35.745961   69161 system_pods.go:89] "kube-proxy-7xjgl" [79ab1bff-5791-464d-98a0-041c53c47234] Running
	I0717 01:42:35.745965   69161 system_pods.go:89] "kube-scheduler-no-preload-818382" [e148b48b-ee09-49b4-9600-83c039254f29] Running
	I0717 01:42:35.745971   69161 system_pods.go:89] "metrics-server-78fcd8795b-vgkwg" [6386b732-76a6-4744-9215-e4764e08e4e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:42:35.745977   69161 system_pods.go:89] "storage-provisioner" [c5a0695e-6c38-463e-8f96-60c0e60c7132] Running
	I0717 01:42:35.745986   69161 system_pods.go:126] duration metric: took 4.820763ms to wait for k8s-apps to be running ...
	I0717 01:42:35.745994   69161 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:42:35.746043   69161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:42:35.763979   69161 system_svc.go:56] duration metric: took 17.975443ms WaitForService to wait for kubelet
	I0717 01:42:35.764007   69161 kubeadm.go:582] duration metric: took 4m25.045517006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:42:35.764027   69161 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:42:35.768267   69161 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:42:35.768297   69161 node_conditions.go:123] node cpu capacity is 2
	I0717 01:42:35.768312   69161 node_conditions.go:105] duration metric: took 4.280712ms to run NodePressure ...
	I0717 01:42:35.768337   69161 start.go:241] waiting for startup goroutines ...
	I0717 01:42:35.768347   69161 start.go:246] waiting for cluster config update ...
	I0717 01:42:35.768374   69161 start.go:255] writing updated cluster config ...
	I0717 01:42:35.768681   69161 ssh_runner.go:195] Run: rm -f paused
	I0717 01:42:35.817223   69161 start.go:600] kubectl: 1.30.2, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 01:42:35.819333   69161 out.go:177] * Done! kubectl is now configured to use "no-preload-818382" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.684935027Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180612684910229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9bcda994-0adb-4b4d-912f-a19e1523094d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.685618672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93990955-4a6c-48d6-a650-ccccf34f0533 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.685685294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93990955-4a6c-48d6-a650-ccccf34f0533 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.685716663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=93990955-4a6c-48d6-a650-ccccf34f0533 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.718995020Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d517130b-4788-414f-9322-1db43dbbb735 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.719139790Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d517130b-4788-414f-9322-1db43dbbb735 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.721347083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b76978c-7b76-4621-b4d4-5ca236ea9799 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.721767681Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180612721746433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b76978c-7b76-4621-b4d4-5ca236ea9799 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.722459882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0987a221-a247-46dd-b603-e1f968210e7f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.722540732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0987a221-a247-46dd-b603-e1f968210e7f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.722594625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0987a221-a247-46dd-b603-e1f968210e7f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.757026051Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0691c00-f240-4bc9-afef-8f847e4cc3b0 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.757097914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0691c00-f240-4bc9-afef-8f847e4cc3b0 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.757972845Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3676d507-53d2-485e-8cc9-39934ec5c448 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.758434675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180612758409507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3676d507-53d2-485e-8cc9-39934ec5c448 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.758863880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a383f48f-65e0-4f17-825b-68a263184662 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.758928004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a383f48f-65e0-4f17-825b-68a263184662 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.758960826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a383f48f-65e0-4f17-825b-68a263184662 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.793331287Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e87fd61c-e21e-4f4c-b2e2-ed29f4c60fa2 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.793422295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e87fd61c-e21e-4f4c-b2e2-ed29f4c60fa2 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.794355953Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8731857f-7fc8-400e-83f4-15badbfe9b13 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.794759163Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180612794735851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8731857f-7fc8-400e-83f4-15badbfe9b13 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.795286105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07e248f1-c1fa-48e8-8e98-978269c75cb5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.795336754Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07e248f1-c1fa-48e8-8e98-978269c75cb5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:32 old-k8s-version-249342 crio[653]: time="2024-07-17 01:43:32.795366237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=07e248f1-c1fa-48e8-8e98-978269c75cb5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul17 01:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053856] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042451] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.738175] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul17 01:21] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586475] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.258109] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.060071] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055484] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.214160] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.115956] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.256032] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +6.048119] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.063005] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.184126] systemd-fstab-generator[967]: Ignoring "noauto" option for root device
	[ +10.091636] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 01:25] systemd-fstab-generator[5033]: Ignoring "noauto" option for root device
	[Jul17 01:27] systemd-fstab-generator[5317]: Ignoring "noauto" option for root device
	[  +0.061098] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:43:32 up 22 min,  0 users,  load average: 0.10, 0.08, 0.03
	Linux old-k8s-version-249342 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc0009413b0)
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]: goroutine 159 [select]:
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c87ef0, 0x4f0ac20, 0xc00091ae60, 0x1, 0xc0001000c0)
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002567e0, 0xc0001000c0)
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009aa490, 0xc00091d880)
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 17 01:43:29 old-k8s-version-249342 kubelet[7072]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 17 01:43:29 old-k8s-version-249342 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 01:43:29 old-k8s-version-249342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 01:43:30 old-k8s-version-249342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 168.
	Jul 17 01:43:30 old-k8s-version-249342 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 01:43:30 old-k8s-version-249342 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 01:43:30 old-k8s-version-249342 kubelet[7081]: I0717 01:43:30.228609    7081 server.go:416] Version: v1.20.0
	Jul 17 01:43:30 old-k8s-version-249342 kubelet[7081]: I0717 01:43:30.228879    7081 server.go:837] Client rotation is on, will bootstrap in background
	Jul 17 01:43:30 old-k8s-version-249342 kubelet[7081]: I0717 01:43:30.230768    7081 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 01:43:30 old-k8s-version-249342 kubelet[7081]: W0717 01:43:30.231675    7081 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 17 01:43:30 old-k8s-version-249342 kubelet[7081]: I0717 01:43:30.231824    7081 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
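The dump above is assembled by a repeating two-step pattern: the harness resolves each control-plane component's container ID with "crictl ps -a --quiet --name=<component>", then tails that container's log with "crictl logs --tail 400 <id>", plus journalctl for crio and the kubelet. A minimal sketch of the same pattern follows; it is not minikube's actual logs.go, and it assumes crictl is available on the node with passwordless sudo.

	// loggather.go: hypothetical sketch of the crictl list-then-tail pattern
	// visible in the trace above. Component names match the trace; everything
	// else (error handling, output format) is illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "storage-provisioner"}
		for _, name := range components {
			// Step 1: list all containers (running or exited) for this component.
			ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("listing %s containers failed: %v\n", name, err)
				continue
			}
			// Step 2: tail the last 400 log lines of each container found.
			for _, id := range strings.Fields(string(ids)) {
				fmt.Printf("==> %s [%s] <==\n", name, id)
				out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Print(string(out))
			}
		}
	}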
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249342 -n old-k8s-version-249342
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-249342 -n old-k8s-version-249342: exit status 2 (231.747466ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-249342" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (312.38s)
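The post-mortem above reports the old-k8s-version apiserver as Stopped; earlier in the trace (during the no-preload-818382 start) the harness confirms apiserver liveness by polling the /healthz endpoint ("Checking apiserver healthz at https://192.168.39.38:8443/healthz" returning 200 ok). A minimal sketch of that kind of probe is shown below; the address is the one from the trace, and skipping TLS verification is a simplification, since the real check authenticates against the cluster's certificates.

	// healthz_probe.go: hypothetical sketch of an apiserver health probe.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// InsecureSkipVerify keeps the sketch self-contained; a real probe
			// would trust the cluster CA and present a client certificate.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.38:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // expected when the apiserver is stopped
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}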

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-818382 -n no-preload-818382
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-17 01:51:36.380789757 +0000 UTC m=+6427.184935589
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
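The assertion above waits up to 9m0s for any pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and times out when none appears. A minimal client-go sketch of the same label-selector query follows; it assumes KUBECONFIG points at the profile's kubeconfig and is illustrative only, not the test's actual implementation.

	// dashboard_check.go: hypothetical sketch of listing dashboard pods by label.
	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the kubeconfig in $KUBECONFIG (assumption).
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			panic(err)
		}
		if len(pods.Items) == 0 {
			fmt.Println("no dashboard pods found") // the condition the test timed out on
			return
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name, p.Status.Phase)
		}
	}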
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-818382 -n no-preload-818382
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-818382 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-818382 logs -n 25: (1.344794676s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-453036 sudo                                | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 01:50 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo cat                            | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 01:50 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| start   | -p enable-default-cni-453036                         | enable-default-cni-453036 | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo cat                            | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 01:50 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo                                | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo                                | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 01:50 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo cat                            | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 01:50 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo docker                         | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo                                | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo                                | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 01:50 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo cat                            | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo cat                            | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo                                | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo                                | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo                                | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo cat                            | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo cat                            | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo                                | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo                                | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo                                | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo find                           | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo crio                           | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p calico-453036                                     | calico-453036             | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	| start   | -p flannel-453036                                    | flannel-453036            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=flannel --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-453036 pgrep                       | custom-flannel-453036     | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | -a kubelet                                           |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:51:03
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:51:03.995807   80566 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:51:03.995943   80566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:03.995956   80566 out.go:304] Setting ErrFile to fd 2...
	I0717 01:51:03.995964   80566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:03.996247   80566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:51:03.997092   80566 out.go:298] Setting JSON to false
	I0717 01:51:03.998596   80566 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9213,"bootTime":1721171851,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:51:03.998684   80566 start.go:139] virtualization: kvm guest
	I0717 01:51:04.001223   80566 out.go:177] * [flannel-453036] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:51:04.002734   80566 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:51:04.002807   80566 notify.go:220] Checking for updates...
	I0717 01:51:04.005481   80566 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:51:04.006879   80566 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:51:04.008374   80566 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:51:04.009770   80566 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:51:04.011154   80566 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:51:04.012960   80566 config.go:182] Loaded profile config "custom-flannel-453036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:51:04.013077   80566 config.go:182] Loaded profile config "enable-default-cni-453036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:51:04.013197   80566 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:51:04.013294   80566 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:51:04.054771   80566 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 01:51:04.056120   80566 start.go:297] selected driver: kvm2
	I0717 01:51:04.056140   80566 start.go:901] validating driver "kvm2" against <nil>
	I0717 01:51:04.056154   80566 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:51:04.056953   80566 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:04.057045   80566 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:51:04.073223   80566 install.go:137] /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:51:04.073308   80566 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 01:51:04.073614   80566 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:51:04.073651   80566 cni.go:84] Creating CNI manager for "flannel"
	I0717 01:51:04.073664   80566 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0717 01:51:04.073740   80566 start.go:340] cluster config:
	{Name:flannel-453036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:04.073874   80566 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:04.075914   80566 out.go:177] * Starting "flannel-453036" primary control-plane node in "flannel-453036" cluster
	I0717 01:51:01.330149   78395 node_ready.go:49] node "custom-flannel-453036" has status "Ready":"True"
	I0717 01:51:01.330171   78395 node_ready.go:38] duration metric: took 5.007883713s for node "custom-flannel-453036" to be "Ready" ...
	I0717 01:51:01.330180   78395 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:51:01.364673   78395 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-gnsjk" in "kube-system" namespace to be "Ready" ...
	I0717 01:51:03.373348   78395 pod_ready.go:102] pod "coredns-7db6d8ff4d-gnsjk" in "kube-system" namespace has status "Ready":"False"
	I0717 01:51:05.872512   78395 pod_ready.go:102] pod "coredns-7db6d8ff4d-gnsjk" in "kube-system" namespace has status "Ready":"False"
	I0717 01:51:04.146440   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:04.146893   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | unable to find current IP address of domain enable-default-cni-453036 in network mk-enable-default-cni-453036
	I0717 01:51:04.146920   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | I0717 01:51:04.146843   79810 retry.go:31] will retry after 1.292448474s: waiting for machine to come up
	I0717 01:51:05.441496   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:05.442182   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | unable to find current IP address of domain enable-default-cni-453036 in network mk-enable-default-cni-453036
	I0717 01:51:05.442210   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | I0717 01:51:05.442135   79810 retry.go:31] will retry after 1.244774651s: waiting for machine to come up
	I0717 01:51:06.688959   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:06.689513   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | unable to find current IP address of domain enable-default-cni-453036 in network mk-enable-default-cni-453036
	I0717 01:51:06.689542   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | I0717 01:51:06.689461   79810 retry.go:31] will retry after 2.18907711s: waiting for machine to come up
	I0717 01:51:04.077738   80566 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:51:04.077786   80566 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 01:51:04.077800   80566 cache.go:56] Caching tarball of preloaded images
	I0717 01:51:04.077926   80566 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:51:04.077940   80566 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 01:51:04.078047   80566 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/config.json ...
	I0717 01:51:04.078078   80566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/config.json: {Name:mk257dfec4b8e1d6dbf38a89223d331c968bad92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:04.078239   80566 start.go:360] acquireMachinesLock for flannel-453036: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:51:07.874888   78395 pod_ready.go:102] pod "coredns-7db6d8ff4d-gnsjk" in "kube-system" namespace has status "Ready":"False"
	I0717 01:51:10.372122   78395 pod_ready.go:102] pod "coredns-7db6d8ff4d-gnsjk" in "kube-system" namespace has status "Ready":"False"
	I0717 01:51:08.880727   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:08.881242   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | unable to find current IP address of domain enable-default-cni-453036 in network mk-enable-default-cni-453036
	I0717 01:51:08.881282   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | I0717 01:51:08.881213   79810 retry.go:31] will retry after 2.208647324s: waiting for machine to come up
	I0717 01:51:11.091020   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:11.091636   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | unable to find current IP address of domain enable-default-cni-453036 in network mk-enable-default-cni-453036
	I0717 01:51:11.091660   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | I0717 01:51:11.091600   79810 retry.go:31] will retry after 2.303273922s: waiting for machine to come up
	I0717 01:51:12.872244   78395 pod_ready.go:102] pod "coredns-7db6d8ff4d-gnsjk" in "kube-system" namespace has status "Ready":"False"
	I0717 01:51:15.374783   78395 pod_ready.go:102] pod "coredns-7db6d8ff4d-gnsjk" in "kube-system" namespace has status "Ready":"False"
	I0717 01:51:13.396434   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:13.396913   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | unable to find current IP address of domain enable-default-cni-453036 in network mk-enable-default-cni-453036
	I0717 01:51:13.396932   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | I0717 01:51:13.396876   79810 retry.go:31] will retry after 4.176708217s: waiting for machine to come up
	I0717 01:51:17.575915   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:17.576452   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | unable to find current IP address of domain enable-default-cni-453036 in network mk-enable-default-cni-453036
	I0717 01:51:17.576480   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | I0717 01:51:17.576402   79810 retry.go:31] will retry after 4.117915362s: waiting for machine to come up
	I0717 01:51:17.383947   78395 pod_ready.go:102] pod "coredns-7db6d8ff4d-gnsjk" in "kube-system" namespace has status "Ready":"False"
	I0717 01:51:18.870210   78395 pod_ready.go:92] pod "coredns-7db6d8ff4d-gnsjk" in "kube-system" namespace has status "Ready":"True"
	I0717 01:51:18.870232   78395 pod_ready.go:81] duration metric: took 17.505484463s for pod "coredns-7db6d8ff4d-gnsjk" in "kube-system" namespace to be "Ready" ...
	I0717 01:51:18.870241   78395 pod_ready.go:78] waiting up to 15m0s for pod "etcd-custom-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:51:18.874342   78395 pod_ready.go:92] pod "etcd-custom-flannel-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:51:18.874357   78395 pod_ready.go:81] duration metric: took 4.110684ms for pod "etcd-custom-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:51:18.874364   78395 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:51:18.878353   78395 pod_ready.go:92] pod "kube-apiserver-custom-flannel-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:51:18.878368   78395 pod_ready.go:81] duration metric: took 3.99825ms for pod "kube-apiserver-custom-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:51:18.878375   78395 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:51:18.881772   78395 pod_ready.go:92] pod "kube-controller-manager-custom-flannel-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:51:18.881787   78395 pod_ready.go:81] duration metric: took 3.40664ms for pod "kube-controller-manager-custom-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:51:18.881795   78395 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-wp5x9" in "kube-system" namespace to be "Ready" ...
	I0717 01:51:18.885702   78395 pod_ready.go:92] pod "kube-proxy-wp5x9" in "kube-system" namespace has status "Ready":"True"
	I0717 01:51:18.885716   78395 pod_ready.go:81] duration metric: took 3.915934ms for pod "kube-proxy-wp5x9" in "kube-system" namespace to be "Ready" ...
	I0717 01:51:18.885723   78395 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:51:19.270400   78395 pod_ready.go:92] pod "kube-scheduler-custom-flannel-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:51:19.270427   78395 pod_ready.go:81] duration metric: took 384.696405ms for pod "kube-scheduler-custom-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:51:19.270443   78395 pod_ready.go:38] duration metric: took 17.940253059s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
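
	The wait loop above polls each system-critical pod until its Ready condition is True. A rough stand-alone equivalent, assuming kubectl is on PATH and using the context name and label selectors shown in the log, is to lean on `kubectl wait` (Go sketch, not minikube's own implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Label selectors taken from the log's list of system-critical pods.
		selectors := []string{
			"k8s-app=kube-dns",
			"component=etcd",
			"component=kube-apiserver",
			"component=kube-controller-manager",
			"k8s-app=kube-proxy",
			"component=kube-scheduler",
		}
		for _, sel := range selectors {
			cmd := exec.Command("kubectl", "--context", "custom-flannel-453036",
				"-n", "kube-system", "wait", "--for=condition=ready",
				"pod", "-l", sel, "--timeout=15m")
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("selector %q not ready: %v\n%s\n", sel, err, out)
				return
			}
		}
		fmt.Println("all system-critical pods are Ready")
	}
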
	I0717 01:51:19.270461   78395 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:51:19.270524   78395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:51:19.288930   78395 api_server.go:72] duration metric: took 23.270642526s to wait for apiserver process to appear ...
	I0717 01:51:19.288954   78395 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:51:19.288971   78395 api_server.go:253] Checking apiserver healthz at https://192.168.72.187:8443/healthz ...
	I0717 01:51:19.293938   78395 api_server.go:279] https://192.168.72.187:8443/healthz returned 200:
	ok
	I0717 01:51:19.295013   78395 api_server.go:141] control plane version: v1.30.2
	I0717 01:51:19.295033   78395 api_server.go:131] duration metric: took 6.073123ms to wait for apiserver health ...
	I0717 01:51:19.295040   78395 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:51:19.472441   78395 system_pods.go:59] 7 kube-system pods found
	I0717 01:51:19.472480   78395 system_pods.go:61] "coredns-7db6d8ff4d-gnsjk" [44de6039-eb07-407a-8733-cbfffe6c3834] Running
	I0717 01:51:19.472486   78395 system_pods.go:61] "etcd-custom-flannel-453036" [3ff429bd-f696-4ace-b411-47344c4e7a3b] Running
	I0717 01:51:19.472490   78395 system_pods.go:61] "kube-apiserver-custom-flannel-453036" [42d647e7-c93c-4c51-be83-6720cb1384fb] Running
	I0717 01:51:19.472494   78395 system_pods.go:61] "kube-controller-manager-custom-flannel-453036" [993a580d-f5ce-465c-ac3b-2ac87312b5a9] Running
	I0717 01:51:19.472497   78395 system_pods.go:61] "kube-proxy-wp5x9" [b2703285-de39-4fc4-ba92-27875c2d48ca] Running
	I0717 01:51:19.472501   78395 system_pods.go:61] "kube-scheduler-custom-flannel-453036" [38f79c44-667e-4f90-8a29-934c9c9fa0e2] Running
	I0717 01:51:19.472504   78395 system_pods.go:61] "storage-provisioner" [aa950b92-42c2-4e49-8907-3b596b27c8c0] Running
	I0717 01:51:19.472510   78395 system_pods.go:74] duration metric: took 177.46453ms to wait for pod list to return data ...
	I0717 01:51:19.472517   78395 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:51:19.668683   78395 default_sa.go:45] found service account: "default"
	I0717 01:51:19.668708   78395 default_sa.go:55] duration metric: took 196.183503ms for default service account to be created ...
	I0717 01:51:19.668717   78395 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:51:19.872971   78395 system_pods.go:86] 7 kube-system pods found
	I0717 01:51:19.873000   78395 system_pods.go:89] "coredns-7db6d8ff4d-gnsjk" [44de6039-eb07-407a-8733-cbfffe6c3834] Running
	I0717 01:51:19.873006   78395 system_pods.go:89] "etcd-custom-flannel-453036" [3ff429bd-f696-4ace-b411-47344c4e7a3b] Running
	I0717 01:51:19.873011   78395 system_pods.go:89] "kube-apiserver-custom-flannel-453036" [42d647e7-c93c-4c51-be83-6720cb1384fb] Running
	I0717 01:51:19.873015   78395 system_pods.go:89] "kube-controller-manager-custom-flannel-453036" [993a580d-f5ce-465c-ac3b-2ac87312b5a9] Running
	I0717 01:51:19.873019   78395 system_pods.go:89] "kube-proxy-wp5x9" [b2703285-de39-4fc4-ba92-27875c2d48ca] Running
	I0717 01:51:19.873023   78395 system_pods.go:89] "kube-scheduler-custom-flannel-453036" [38f79c44-667e-4f90-8a29-934c9c9fa0e2] Running
	I0717 01:51:19.873026   78395 system_pods.go:89] "storage-provisioner" [aa950b92-42c2-4e49-8907-3b596b27c8c0] Running
	I0717 01:51:19.873032   78395 system_pods.go:126] duration metric: took 204.310569ms to wait for k8s-apps to be running ...
	I0717 01:51:19.873039   78395 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:51:19.873081   78395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:51:19.888606   78395 system_svc.go:56] duration metric: took 15.55395ms WaitForService to wait for kubelet
	I0717 01:51:19.888647   78395 kubeadm.go:582] duration metric: took 23.870363586s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:51:19.888678   78395 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:51:20.068481   78395 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:51:20.068508   78395 node_conditions.go:123] node cpu capacity is 2
	I0717 01:51:20.068518   78395 node_conditions.go:105] duration metric: took 179.834504ms to run NodePressure ...
	I0717 01:51:20.068531   78395 start.go:241] waiting for startup goroutines ...
	I0717 01:51:20.068538   78395 start.go:246] waiting for cluster config update ...
	I0717 01:51:20.068546   78395 start.go:255] writing updated cluster config ...
	I0717 01:51:20.068845   78395 ssh_runner.go:195] Run: rm -f paused
	I0717 01:51:20.118151   78395 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:51:20.120195   78395 out.go:177] * Done! kubectl is now configured to use "custom-flannel-453036" cluster and "default" namespace by default
	I0717 01:51:21.697781   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:21.698406   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has current primary IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:21.698434   79788 main.go:141] libmachine: (enable-default-cni-453036) Found IP for machine: 192.168.50.111
	I0717 01:51:21.698459   79788 main.go:141] libmachine: (enable-default-cni-453036) Reserving static IP address...
	I0717 01:51:21.698929   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-453036", mac: "52:54:00:09:94:be", ip: "192.168.50.111"} in network mk-enable-default-cni-453036
	I0717 01:51:21.783117   79788 main.go:141] libmachine: (enable-default-cni-453036) Reserved static IP address: 192.168.50.111
	I0717 01:51:21.783146   79788 main.go:141] libmachine: (enable-default-cni-453036) Waiting for SSH to be available...
	I0717 01:51:21.783156   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | Getting to WaitForSSH function...
	I0717 01:51:21.786018   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:21.786482   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:09:94:be}
	I0717 01:51:21.786514   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:21.786696   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | Using SSH client type: external
	I0717 01:51:21.786718   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/enable-default-cni-453036/id_rsa (-rw-------)
	I0717 01:51:21.786761   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/enable-default-cni-453036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:51:21.786776   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | About to run SSH command:
	I0717 01:51:21.786818   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | exit 0
	I0717 01:51:21.917252   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | SSH cmd err, output: <nil>: 
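
	The probe above shells out to the system ssh binary and treats a clean `exit 0` as proof the guest is reachable. A rough stand-alone Go sketch of the same idea, assuming an ssh client on PATH; the host and key path in main are placeholders, not values to reuse:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probeSSH runs `exit 0` over ssh and treats a zero exit status as
	// "SSH is available", mirroring the style of probe shown in the log.
	func probeSSH(host, keyPath string) error {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host,
			"exit 0")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh not ready: %v (output: %s)", err, out)
		}
		return nil
	}

	func main() {
		if err := probeSSH("192.168.50.111", "/path/to/id_rsa"); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("SSH is available")
		}
	}
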
	I0717 01:51:21.917561   79788 main.go:141] libmachine: (enable-default-cni-453036) KVM machine creation complete!
	I0717 01:51:21.917892   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetConfigRaw
	I0717 01:51:21.919009   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .DriverName
	I0717 01:51:21.919216   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .DriverName
	I0717 01:51:21.919398   79788 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 01:51:21.919413   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetState
	I0717 01:51:21.920961   79788 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 01:51:21.920978   79788 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 01:51:21.920985   79788 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 01:51:21.920994   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHHostname
	I0717 01:51:21.923918   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:21.924333   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:21.924358   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:21.924616   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHPort
	I0717 01:51:21.924819   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:21.925039   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:21.925218   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHUsername
	I0717 01:51:21.925427   79788 main.go:141] libmachine: Using SSH client type: native
	I0717 01:51:21.925642   79788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0717 01:51:21.925653   79788 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 01:51:22.040617   79788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:51:22.040646   79788 main.go:141] libmachine: Detecting the provisioner...
	I0717 01:51:22.040657   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHHostname
	I0717 01:51:22.043425   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.043790   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:22.043827   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.044005   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHPort
	I0717 01:51:22.044194   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:22.044361   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:22.044486   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHUsername
	I0717 01:51:22.044671   79788 main.go:141] libmachine: Using SSH client type: native
	I0717 01:51:22.044878   79788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0717 01:51:22.044890   79788 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 01:51:22.166433   79788 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 01:51:22.166495   79788 main.go:141] libmachine: found compatible host: buildroot
	I0717 01:51:22.166502   79788 main.go:141] libmachine: Provisioning with buildroot...
	I0717 01:51:22.166509   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetMachineName
	I0717 01:51:22.166755   79788 buildroot.go:166] provisioning hostname "enable-default-cni-453036"
	I0717 01:51:22.166784   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetMachineName
	I0717 01:51:22.166994   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHHostname
	I0717 01:51:22.169886   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.170279   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:22.170316   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.170474   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHPort
	I0717 01:51:22.170641   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:22.170809   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:22.170938   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHUsername
	I0717 01:51:22.171107   79788 main.go:141] libmachine: Using SSH client type: native
	I0717 01:51:22.171329   79788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0717 01:51:22.171349   79788 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-453036 && echo "enable-default-cni-453036" | sudo tee /etc/hostname
	I0717 01:51:22.307811   79788 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-453036
	
	I0717 01:51:22.307841   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHHostname
	I0717 01:51:22.310934   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.311400   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:22.311433   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.311670   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHPort
	I0717 01:51:22.311935   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:22.312114   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:22.312246   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHUsername
	I0717 01:51:22.312408   79788 main.go:141] libmachine: Using SSH client type: native
	I0717 01:51:22.312644   79788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0717 01:51:22.312671   79788 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-453036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-453036/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-453036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:51:22.439178   79788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
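
	The SSH command above checks /etc/hosts for the hostname and rewrites or appends the 127.0.1.1 entry. A minimal local Go sketch of the same check-and-rewrite logic, operating on an arbitrary file path rather than over SSH (the path in main is a placeholder):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostname mirrors the shell snippet in the log: if the file does not
	// already contain a line ending in the hostname, rewrite (or append) the
	// 127.0.1.1 entry so the machine resolves its own name.
	func ensureHostname(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		text := string(data)
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(text) {
			return nil // already mapped
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(text) {
			text = loopback.ReplaceAllString(text, "127.0.1.1 "+hostname)
		} else {
			if !strings.HasSuffix(text, "\n") {
				text += "\n"
			}
			text += "127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(text), 0644)
	}

	func main() {
		if err := ensureHostname("/tmp/hosts-copy", "enable-default-cni-453036"); err != nil {
			fmt.Println(err)
		}
	}
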
	I0717 01:51:22.439209   79788 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 01:51:22.439230   79788 buildroot.go:174] setting up certificates
	I0717 01:51:22.439243   79788 provision.go:84] configureAuth start
	I0717 01:51:22.439254   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetMachineName
	I0717 01:51:22.439544   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetIP
	I0717 01:51:22.442568   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.442988   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:22.443016   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.443207   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHHostname
	I0717 01:51:22.445820   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.446241   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:22.446267   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.446397   79788 provision.go:143] copyHostCerts
	I0717 01:51:22.446454   79788 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 01:51:22.446466   79788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 01:51:22.446537   79788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 01:51:22.446625   79788 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 01:51:22.446634   79788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 01:51:22.446658   79788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 01:51:22.446722   79788 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 01:51:22.446729   79788 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 01:51:22.446749   79788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 01:51:22.446820   79788 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-453036 san=[127.0.0.1 192.168.50.111 enable-default-cni-453036 localhost minikube]
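
	The log records the SANs baked into the generated server certificate (two IPs plus three DNS names). As an illustration only, here is a self-contained Go sketch that puts the same SAN set into an x509 template and signs it; it creates a throwaway CA in memory, whereas minikube signs with the ca.pem/ca-key.pem files listed above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA; errors are ignored to keep the sketch short.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs seen in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.enable-default-cni-453036"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.111")},
			DNSNames:     []string{"enable-default-cni-453036", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
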
	I0717 01:51:22.641938   79788 provision.go:177] copyRemoteCerts
	I0717 01:51:22.642038   79788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:51:22.642074   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHHostname
	I0717 01:51:22.644870   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.645291   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:22.645323   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.645530   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHPort
	I0717 01:51:22.645747   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:22.645935   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHUsername
	I0717 01:51:22.646139   79788 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/enable-default-cni-453036/id_rsa Username:docker}
	I0717 01:51:22.734653   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 01:51:22.761182   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 01:51:22.786090   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:51:22.809624   79788 provision.go:87] duration metric: took 370.368106ms to configureAuth
	I0717 01:51:22.809656   79788 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:51:22.809817   79788 config.go:182] Loaded profile config "enable-default-cni-453036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:51:22.809892   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHHostname
	I0717 01:51:22.812658   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.813029   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:22.813056   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:22.813273   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHPort
	I0717 01:51:22.813476   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:22.813622   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:22.813768   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHUsername
	I0717 01:51:22.813931   79788 main.go:141] libmachine: Using SSH client type: native
	I0717 01:51:22.814142   79788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0717 01:51:22.814181   79788 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:51:23.373699   80566 start.go:364] duration metric: took 19.295417051s to acquireMachinesLock for "flannel-453036"
	I0717 01:51:23.373775   80566 start.go:93] Provisioning new machine with config: &{Name:flannel-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:51:23.373919   80566 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 01:51:23.375551   80566 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 01:51:23.375771   80566 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:51:23.375828   80566 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:23.395733   80566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42613
	I0717 01:51:23.396341   80566 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:23.397012   80566 main.go:141] libmachine: Using API Version  1
	I0717 01:51:23.397041   80566 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:23.397398   80566 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:23.397554   80566 main.go:141] libmachine: (flannel-453036) Calling .GetMachineName
	I0717 01:51:23.397685   80566 main.go:141] libmachine: (flannel-453036) Calling .DriverName
	I0717 01:51:23.397978   80566 start.go:159] libmachine.API.Create for "flannel-453036" (driver="kvm2")
	I0717 01:51:23.398008   80566 client.go:168] LocalClient.Create starting
	I0717 01:51:23.398045   80566 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 01:51:23.398086   80566 main.go:141] libmachine: Decoding PEM data...
	I0717 01:51:23.398109   80566 main.go:141] libmachine: Parsing certificate...
	I0717 01:51:23.398174   80566 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 01:51:23.398198   80566 main.go:141] libmachine: Decoding PEM data...
	I0717 01:51:23.398211   80566 main.go:141] libmachine: Parsing certificate...
	I0717 01:51:23.398234   80566 main.go:141] libmachine: Running pre-create checks...
	I0717 01:51:23.398245   80566 main.go:141] libmachine: (flannel-453036) Calling .PreCreateCheck
	I0717 01:51:23.398709   80566 main.go:141] libmachine: (flannel-453036) Calling .GetConfigRaw
	I0717 01:51:23.399198   80566 main.go:141] libmachine: Creating machine...
	I0717 01:51:23.399214   80566 main.go:141] libmachine: (flannel-453036) Calling .Create
	I0717 01:51:23.399374   80566 main.go:141] libmachine: (flannel-453036) Creating KVM machine...
	I0717 01:51:23.400638   80566 main.go:141] libmachine: (flannel-453036) DBG | found existing default KVM network
	I0717 01:51:23.402209   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:23.402007   80752 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:26:93} reservation:<nil>}
	I0717 01:51:23.403532   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:23.403410   80752 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:36:8f:94} reservation:<nil>}
	I0717 01:51:23.404922   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:23.404818   80752 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000205d80}
	I0717 01:51:23.404943   80566 main.go:141] libmachine: (flannel-453036) DBG | created network xml: 
	I0717 01:51:23.404955   80566 main.go:141] libmachine: (flannel-453036) DBG | <network>
	I0717 01:51:23.404963   80566 main.go:141] libmachine: (flannel-453036) DBG |   <name>mk-flannel-453036</name>
	I0717 01:51:23.405026   80566 main.go:141] libmachine: (flannel-453036) DBG |   <dns enable='no'/>
	I0717 01:51:23.405042   80566 main.go:141] libmachine: (flannel-453036) DBG |   
	I0717 01:51:23.405058   80566 main.go:141] libmachine: (flannel-453036) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0717 01:51:23.405071   80566 main.go:141] libmachine: (flannel-453036) DBG |     <dhcp>
	I0717 01:51:23.405083   80566 main.go:141] libmachine: (flannel-453036) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0717 01:51:23.405090   80566 main.go:141] libmachine: (flannel-453036) DBG |     </dhcp>
	I0717 01:51:23.405099   80566 main.go:141] libmachine: (flannel-453036) DBG |   </ip>
	I0717 01:51:23.405105   80566 main.go:141] libmachine: (flannel-453036) DBG |   
	I0717 01:51:23.405113   80566 main.go:141] libmachine: (flannel-453036) DBG | </network>
	I0717 01:51:23.405118   80566 main.go:141] libmachine: (flannel-453036) DBG | 
	I0717 01:51:23.411039   80566 main.go:141] libmachine: (flannel-453036) DBG | trying to create private KVM network mk-flannel-453036 192.168.61.0/24...
	I0717 01:51:23.494827   80566 main.go:141] libmachine: (flannel-453036) DBG | private KVM network mk-flannel-453036 192.168.61.0/24 created
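
	The XML above is the libvirt network definition the driver generates before creating mk-flannel-453036. A small Go sketch that rebuilds an equivalent document with encoding/xml; the struct layout is an assumption chosen to mirror the logged XML, not minikube's own types, and the marshalled output differs cosmetically (double quotes, non-self-closing tags):

	package main

	import (
		"encoding/xml"
		"fmt"
	)

	// network mirrors the <network> element printed in the log.
	type network struct {
		XMLName xml.Name `xml:"network"`
		Name    string   `xml:"name"`
		DNS     struct {
			Enable string `xml:"enable,attr"`
		} `xml:"dns"`
		IP struct {
			Address string `xml:"address,attr"`
			Netmask string `xml:"netmask,attr"`
			DHCP    struct {
				Range struct {
					Start string `xml:"start,attr"`
					End   string `xml:"end,attr"`
				} `xml:"range"`
			} `xml:"dhcp"`
		} `xml:"ip"`
	}

	func main() {
		n := network{Name: "mk-flannel-453036"}
		n.DNS.Enable = "no"
		n.IP.Address = "192.168.61.1"
		n.IP.Netmask = "255.255.255.0"
		n.IP.DHCP.Range.Start = "192.168.61.2"
		n.IP.DHCP.Range.End = "192.168.61.253"
		out, _ := xml.MarshalIndent(n, "", "  ")
		fmt.Println(string(out))
	}
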
	I0717 01:51:23.494957   80566 main.go:141] libmachine: (flannel-453036) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/flannel-453036 ...
	I0717 01:51:23.494992   80566 main.go:141] libmachine: (flannel-453036) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 01:51:23.495006   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:23.494930   80752 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:51:23.495144   80566 main.go:141] libmachine: (flannel-453036) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 01:51:23.778783   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:23.778641   80752 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/flannel-453036/id_rsa...
	I0717 01:51:23.897873   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:23.897729   80752 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/flannel-453036/flannel-453036.rawdisk...
	I0717 01:51:23.897905   80566 main.go:141] libmachine: (flannel-453036) DBG | Writing magic tar header
	I0717 01:51:23.897922   80566 main.go:141] libmachine: (flannel-453036) DBG | Writing SSH key tar header
	I0717 01:51:23.898013   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:23.897919   80752 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/flannel-453036 ...
	I0717 01:51:23.898054   80566 main.go:141] libmachine: (flannel-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/flannel-453036
	I0717 01:51:23.898136   80566 main.go:141] libmachine: (flannel-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/flannel-453036 (perms=drwx------)
	I0717 01:51:23.898153   80566 main.go:141] libmachine: (flannel-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 01:51:23.898175   80566 main.go:141] libmachine: (flannel-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 01:51:23.898198   80566 main.go:141] libmachine: (flannel-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 01:51:23.898213   80566 main.go:141] libmachine: (flannel-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 01:51:23.898236   80566 main.go:141] libmachine: (flannel-453036) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 01:51:23.898246   80566 main.go:141] libmachine: (flannel-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:51:23.898258   80566 main.go:141] libmachine: (flannel-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 01:51:23.898267   80566 main.go:141] libmachine: (flannel-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 01:51:23.898275   80566 main.go:141] libmachine: (flannel-453036) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 01:51:23.898286   80566 main.go:141] libmachine: (flannel-453036) Creating domain...
	I0717 01:51:23.898293   80566 main.go:141] libmachine: (flannel-453036) DBG | Checking permissions on dir: /home/jenkins
	I0717 01:51:23.898298   80566 main.go:141] libmachine: (flannel-453036) DBG | Checking permissions on dir: /home
	I0717 01:51:23.898304   80566 main.go:141] libmachine: (flannel-453036) DBG | Skipping /home - not owner
	I0717 01:51:23.899577   80566 main.go:141] libmachine: (flannel-453036) define libvirt domain using xml: 
	I0717 01:51:23.899601   80566 main.go:141] libmachine: (flannel-453036) <domain type='kvm'>
	I0717 01:51:23.899612   80566 main.go:141] libmachine: (flannel-453036)   <name>flannel-453036</name>
	I0717 01:51:23.899624   80566 main.go:141] libmachine: (flannel-453036)   <memory unit='MiB'>3072</memory>
	I0717 01:51:23.899633   80566 main.go:141] libmachine: (flannel-453036)   <vcpu>2</vcpu>
	I0717 01:51:23.899638   80566 main.go:141] libmachine: (flannel-453036)   <features>
	I0717 01:51:23.899654   80566 main.go:141] libmachine: (flannel-453036)     <acpi/>
	I0717 01:51:23.899661   80566 main.go:141] libmachine: (flannel-453036)     <apic/>
	I0717 01:51:23.899682   80566 main.go:141] libmachine: (flannel-453036)     <pae/>
	I0717 01:51:23.899693   80566 main.go:141] libmachine: (flannel-453036)     
	I0717 01:51:23.899702   80566 main.go:141] libmachine: (flannel-453036)   </features>
	I0717 01:51:23.899717   80566 main.go:141] libmachine: (flannel-453036)   <cpu mode='host-passthrough'>
	I0717 01:51:23.899727   80566 main.go:141] libmachine: (flannel-453036)   
	I0717 01:51:23.899734   80566 main.go:141] libmachine: (flannel-453036)   </cpu>
	I0717 01:51:23.899746   80566 main.go:141] libmachine: (flannel-453036)   <os>
	I0717 01:51:23.899754   80566 main.go:141] libmachine: (flannel-453036)     <type>hvm</type>
	I0717 01:51:23.899766   80566 main.go:141] libmachine: (flannel-453036)     <boot dev='cdrom'/>
	I0717 01:51:23.899776   80566 main.go:141] libmachine: (flannel-453036)     <boot dev='hd'/>
	I0717 01:51:23.899786   80566 main.go:141] libmachine: (flannel-453036)     <bootmenu enable='no'/>
	I0717 01:51:23.899800   80566 main.go:141] libmachine: (flannel-453036)   </os>
	I0717 01:51:23.899811   80566 main.go:141] libmachine: (flannel-453036)   <devices>
	I0717 01:51:23.899820   80566 main.go:141] libmachine: (flannel-453036)     <disk type='file' device='cdrom'>
	I0717 01:51:23.899838   80566 main.go:141] libmachine: (flannel-453036)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/flannel-453036/boot2docker.iso'/>
	I0717 01:51:23.899848   80566 main.go:141] libmachine: (flannel-453036)       <target dev='hdc' bus='scsi'/>
	I0717 01:51:23.899860   80566 main.go:141] libmachine: (flannel-453036)       <readonly/>
	I0717 01:51:23.899874   80566 main.go:141] libmachine: (flannel-453036)     </disk>
	I0717 01:51:23.899887   80566 main.go:141] libmachine: (flannel-453036)     <disk type='file' device='disk'>
	I0717 01:51:23.899900   80566 main.go:141] libmachine: (flannel-453036)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 01:51:23.899915   80566 main.go:141] libmachine: (flannel-453036)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/flannel-453036/flannel-453036.rawdisk'/>
	I0717 01:51:23.899926   80566 main.go:141] libmachine: (flannel-453036)       <target dev='hda' bus='virtio'/>
	I0717 01:51:23.899938   80566 main.go:141] libmachine: (flannel-453036)     </disk>
	I0717 01:51:23.899957   80566 main.go:141] libmachine: (flannel-453036)     <interface type='network'>
	I0717 01:51:23.899969   80566 main.go:141] libmachine: (flannel-453036)       <source network='mk-flannel-453036'/>
	I0717 01:51:23.899985   80566 main.go:141] libmachine: (flannel-453036)       <model type='virtio'/>
	I0717 01:51:23.899996   80566 main.go:141] libmachine: (flannel-453036)     </interface>
	I0717 01:51:23.900004   80566 main.go:141] libmachine: (flannel-453036)     <interface type='network'>
	I0717 01:51:23.900016   80566 main.go:141] libmachine: (flannel-453036)       <source network='default'/>
	I0717 01:51:23.900027   80566 main.go:141] libmachine: (flannel-453036)       <model type='virtio'/>
	I0717 01:51:23.900037   80566 main.go:141] libmachine: (flannel-453036)     </interface>
	I0717 01:51:23.900048   80566 main.go:141] libmachine: (flannel-453036)     <serial type='pty'>
	I0717 01:51:23.900099   80566 main.go:141] libmachine: (flannel-453036)       <target port='0'/>
	I0717 01:51:23.900131   80566 main.go:141] libmachine: (flannel-453036)     </serial>
	I0717 01:51:23.900152   80566 main.go:141] libmachine: (flannel-453036)     <console type='pty'>
	I0717 01:51:23.900172   80566 main.go:141] libmachine: (flannel-453036)       <target type='serial' port='0'/>
	I0717 01:51:23.900182   80566 main.go:141] libmachine: (flannel-453036)     </console>
	I0717 01:51:23.900189   80566 main.go:141] libmachine: (flannel-453036)     <rng model='virtio'>
	I0717 01:51:23.900206   80566 main.go:141] libmachine: (flannel-453036)       <backend model='random'>/dev/random</backend>
	I0717 01:51:23.900221   80566 main.go:141] libmachine: (flannel-453036)     </rng>
	I0717 01:51:23.900241   80566 main.go:141] libmachine: (flannel-453036)     
	I0717 01:51:23.900255   80566 main.go:141] libmachine: (flannel-453036)     
	I0717 01:51:23.900267   80566 main.go:141] libmachine: (flannel-453036)   </devices>
	I0717 01:51:23.900276   80566 main.go:141] libmachine: (flannel-453036) </domain>
	I0717 01:51:23.900286   80566 main.go:141] libmachine: (flannel-453036) 
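
	The domain XML above is what the driver feeds to libvirt. minikube applies it through the libvirt Go bindings; done by hand, roughly the same effect can be had with the virsh CLI, as in this sketch (the XML path and domain name are placeholders):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// defineAndStart defines the domain from an XML file and then starts it,
	// the manual counterpart of the "define libvirt domain using xml" step above.
	func defineAndStart(xmlPath, domain string) error {
		if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh define: %v (%s)", err, out)
		}
		if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh start: %v (%s)", err, out)
		}
		return nil
	}

	func main() {
		if err := defineAndStart("/tmp/flannel-453036.xml", "flannel-453036"); err != nil {
			fmt.Println(err)
		}
	}

	`virsh net-define` and `virsh net-start` play the same role for the private network shown earlier.
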
	I0717 01:51:23.113371   79788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:51:23.113400   79788 main.go:141] libmachine: Checking connection to Docker...
	I0717 01:51:23.113410   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetURL
	I0717 01:51:23.114914   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | Using libvirt version 6000000
	I0717 01:51:23.117748   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.118165   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:23.118204   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.118453   79788 main.go:141] libmachine: Docker is up and running!
	I0717 01:51:23.118472   79788 main.go:141] libmachine: Reticulating splines...
	I0717 01:51:23.118481   79788 client.go:171] duration metric: took 25.020244609s to LocalClient.Create
	I0717 01:51:23.118509   79788 start.go:167] duration metric: took 25.020321268s to libmachine.API.Create "enable-default-cni-453036"
	I0717 01:51:23.118521   79788 start.go:293] postStartSetup for "enable-default-cni-453036" (driver="kvm2")
	I0717 01:51:23.118534   79788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:51:23.118555   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .DriverName
	I0717 01:51:23.118789   79788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:51:23.118821   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHHostname
	I0717 01:51:23.121117   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.121496   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:23.121520   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.121702   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHPort
	I0717 01:51:23.121859   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:23.122030   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHUsername
	I0717 01:51:23.122207   79788 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/enable-default-cni-453036/id_rsa Username:docker}
	I0717 01:51:23.209488   79788 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:51:23.213799   79788 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:51:23.213823   79788 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:51:23.213882   79788 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:51:23.213969   79788 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:51:23.214096   79788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:51:23.223940   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:51:23.248615   79788 start.go:296] duration metric: took 130.082172ms for postStartSetup
	I0717 01:51:23.248658   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetConfigRaw
	I0717 01:51:23.249212   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetIP
	I0717 01:51:23.252189   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.252631   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:23.252658   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.252929   79788 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/config.json ...
	I0717 01:51:23.253739   79788 start.go:128] duration metric: took 25.178862305s to createHost
	I0717 01:51:23.253766   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHHostname
	I0717 01:51:23.256282   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.256649   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:23.256676   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.256831   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHPort
	I0717 01:51:23.257028   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:23.257243   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:23.257393   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHUsername
	I0717 01:51:23.257551   79788 main.go:141] libmachine: Using SSH client type: native
	I0717 01:51:23.257757   79788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.111 22 <nil> <nil>}
	I0717 01:51:23.257771   79788 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 01:51:23.373528   79788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181083.348296139
	
	I0717 01:51:23.373559   79788 fix.go:216] guest clock: 1721181083.348296139
	I0717 01:51:23.373568   79788 fix.go:229] Guest: 2024-07-17 01:51:23.348296139 +0000 UTC Remote: 2024-07-17 01:51:23.253753274 +0000 UTC m=+25.296719234 (delta=94.542865ms)
	I0717 01:51:23.373602   79788 fix.go:200] guest clock delta is within tolerance: 94.542865ms
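
The fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host clock and accept the machine only if the delta is within tolerance. Below is a minimal Go sketch of that comparison, using the timestamp from the log and a hypothetical tolerance; it is not minikube's fix.go, just the same idea. Parsing seconds and nanoseconds separately avoids float rounding on the fractional part.

    // clock delta sketch (illustrative only, not minikube's implementation)
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseDateOutput turns "1721181083.348296139" (output of `date +%s.%N`) into a time.Time.
    func parseDateOutput(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseDateOutput("1721181083.348296139") // value taken from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // hypothetical threshold, not minikube's setting
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
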
	I0717 01:51:23.373609   79788 start.go:83] releasing machines lock for "enable-default-cni-453036", held for 25.298845768s
	I0717 01:51:23.373637   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .DriverName
	I0717 01:51:23.373956   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetIP
	I0717 01:51:23.377168   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.377602   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:23.377635   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.377771   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .DriverName
	I0717 01:51:23.378316   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .DriverName
	I0717 01:51:23.378518   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .DriverName
	I0717 01:51:23.378613   79788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:51:23.378664   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHHostname
	I0717 01:51:23.378725   79788 ssh_runner.go:195] Run: cat /version.json
	I0717 01:51:23.378795   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHHostname
	I0717 01:51:23.381565   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.381767   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.381936   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:23.381969   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.382123   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHPort
	I0717 01:51:23.382200   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:23.382234   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:23.382262   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:23.382389   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHPort
	I0717 01:51:23.382500   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHUsername
	I0717 01:51:23.382526   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:23.382638   79788 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/enable-default-cni-453036/id_rsa Username:docker}
	I0717 01:51:23.382687   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHUsername
	I0717 01:51:23.382821   79788 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/enable-default-cni-453036/id_rsa Username:docker}
	I0717 01:51:23.495686   79788 ssh_runner.go:195] Run: systemctl --version
	I0717 01:51:23.502740   79788 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:51:23.665422   79788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:51:23.673085   79788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:51:23.673166   79788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:51:23.693764   79788 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
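
The find/mv pipeline above renames any bridge or podman CNI configs so they cannot conflict with the CNI minikube is about to set up. A rough local sketch of the same rename-to-`.mk_disabled` idea; the directory is a parameter, the function name is mine, and the real step runs over SSH with sudo.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfigs renames bridge/podman CNI config files with a ".mk_disabled" suffix.
    func disableCNIConfigs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        files, err := disableCNIConfigs("/etc/cni/net.d") // needs root on a real host
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Println("disabled:", files)
    }
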
	I0717 01:51:23.693789   79788 start.go:495] detecting cgroup driver to use...
	I0717 01:51:23.693866   79788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:51:23.713232   79788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:51:23.730551   79788 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:51:23.730607   79788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:51:23.745715   79788 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:51:23.761950   79788 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:51:23.901266   79788 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:51:24.062726   79788 docker.go:233] disabling docker service ...
	I0717 01:51:24.062797   79788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:51:24.078479   79788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:51:24.092618   79788 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:51:24.248042   79788 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:51:24.395348   79788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:51:24.414681   79788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:51:24.436873   79788 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:51:24.436942   79788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:51:24.448194   79788 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:51:24.448262   79788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:51:24.460811   79788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:51:24.472243   79788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:51:24.482904   79788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:51:24.493874   79788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:51:24.504337   79788 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:51:24.521751   79788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
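
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager. A small in-memory Go sketch of that kind of "key = value" rewrite; it is illustrative only, since minikube drives sed over SSH rather than editing the file in Go.

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCrioOption replaces (or appends) a `key = "value"` line in a cri-o drop-in,
    // mirroring the `sed -i 's|^.*key = .*$|key = "value"|'` calls in the log.
    func setCrioOption(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        line := fmt.Sprintf("%s = %q", key, value)
        if re.MatchString(conf) {
            return re.ReplaceAllString(conf, line)
        }
        return conf + "\n" + line + "\n"
    }

    func main() {
        conf := "[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
        conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.9")
        conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
        fmt.Print(conf)
    }
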
	I0717 01:51:24.533556   79788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:51:24.543922   79788 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:51:24.544023   79788 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:51:24.559756   79788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
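
The sysctl failure above is tolerated: when /proc/sys/net/bridge/bridge-nf-call-iptables is missing, the br_netfilter module is loaded and IPv4 forwarding is switched on. A hedged sketch of that fallback using the same commands the log shows; it needs root to actually take effect, and the function name is mine.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter checks the bridge netfilter sysctl and, if it is not
    // visible, tries to load the br_netfilter kernel module.
    func ensureBridgeNetfilter() error {
        const path = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(path); err == nil {
            return nil // module already loaded, sysctl is visible
        }
        out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput()
        if err != nil {
            return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Println("warning:", err)
        }
        // enable IPv4 forwarding, as the log's `echo 1 > /proc/sys/net/ipv4/ip_forward` does
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
            fmt.Println("warning:", err)
        }
    }
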
	I0717 01:51:24.571139   79788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:51:24.699001   79788 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:51:24.844467   79788 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:51:24.844540   79788 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:51:24.849588   79788 start.go:563] Will wait 60s for crictl version
	I0717 01:51:24.849639   79788 ssh_runner.go:195] Run: which crictl
	I0717 01:51:24.853712   79788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:51:24.897037   79788 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:51:24.897146   79788 ssh_runner.go:195] Run: crio --version
	I0717 01:51:24.929108   79788 ssh_runner.go:195] Run: crio --version
	I0717 01:51:24.962328   79788 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:51:24.963622   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetIP
	I0717 01:51:24.966450   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:24.966855   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:24.966887   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:24.967136   79788 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 01:51:24.971525   79788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
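
The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends a fresh "IP<TAB>hostname" entry. A small sketch of the same idea against a local file; path, IP and hostname are parameters, and this is not minikube's helper.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry removes any line ending in "<TAB>hostname" and appends a new mapping.
    func ensureHostsEntry(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line == "" || strings.HasSuffix(strings.TrimRight(line, " "), "\t"+hostname) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+hostname)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // scratch file so the sketch can be run without touching the real /etc/hosts
        if err := ensureHostsEntry("/tmp/hosts.test", "192.168.50.1", "host.minikube.internal"); err != nil {
            fmt.Println("error:", err)
        }
    }
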
	I0717 01:51:24.985478   79788 kubeadm.go:883] updating cluster {Name:enable-default-cni-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.2 ClusterName:enable-default-cni-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.111 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:51:24.985570   79788 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:51:24.985636   79788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:51:25.023323   79788 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:51:25.023408   79788 ssh_runner.go:195] Run: which lz4
	I0717 01:51:25.027899   79788 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:51:25.033425   79788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:51:25.033454   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:51:26.512231   79788 crio.go:462] duration metric: took 1.484362817s to copy over tarball
	I0717 01:51:26.512307   79788 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:51:24.028174   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:f9:a9:02 in network default
	I0717 01:51:24.028925   80566 main.go:141] libmachine: (flannel-453036) Ensuring networks are active...
	I0717 01:51:24.028954   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:24.029667   80566 main.go:141] libmachine: (flannel-453036) Ensuring network default is active
	I0717 01:51:24.029994   80566 main.go:141] libmachine: (flannel-453036) Ensuring network mk-flannel-453036 is active
	I0717 01:51:24.030794   80566 main.go:141] libmachine: (flannel-453036) Getting domain xml...
	I0717 01:51:24.031727   80566 main.go:141] libmachine: (flannel-453036) Creating domain...
	I0717 01:51:25.443109   80566 main.go:141] libmachine: (flannel-453036) Waiting to get IP...
	I0717 01:51:25.444011   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:25.444578   80566 main.go:141] libmachine: (flannel-453036) DBG | unable to find current IP address of domain flannel-453036 in network mk-flannel-453036
	I0717 01:51:25.444611   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:25.444550   80752 retry.go:31] will retry after 253.451982ms: waiting for machine to come up
	I0717 01:51:25.700209   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:25.700900   80566 main.go:141] libmachine: (flannel-453036) DBG | unable to find current IP address of domain flannel-453036 in network mk-flannel-453036
	I0717 01:51:25.700925   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:25.700850   80752 retry.go:31] will retry after 318.787117ms: waiting for machine to come up
	I0717 01:51:26.021637   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:26.022214   80566 main.go:141] libmachine: (flannel-453036) DBG | unable to find current IP address of domain flannel-453036 in network mk-flannel-453036
	I0717 01:51:26.022245   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:26.022155   80752 retry.go:31] will retry after 398.139163ms: waiting for machine to come up
	I0717 01:51:26.421912   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:26.422661   80566 main.go:141] libmachine: (flannel-453036) DBG | unable to find current IP address of domain flannel-453036 in network mk-flannel-453036
	I0717 01:51:26.422693   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:26.422613   80752 retry.go:31] will retry after 504.422225ms: waiting for machine to come up
	I0717 01:51:26.928177   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:26.928958   80566 main.go:141] libmachine: (flannel-453036) DBG | unable to find current IP address of domain flannel-453036 in network mk-flannel-453036
	I0717 01:51:26.928986   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:26.928939   80752 retry.go:31] will retry after 495.18466ms: waiting for machine to come up
	I0717 01:51:27.426211   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:27.426911   80566 main.go:141] libmachine: (flannel-453036) DBG | unable to find current IP address of domain flannel-453036 in network mk-flannel-453036
	I0717 01:51:27.426938   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:27.426772   80752 retry.go:31] will retry after 607.929224ms: waiting for machine to come up
	I0717 01:51:28.036858   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:28.037505   80566 main.go:141] libmachine: (flannel-453036) DBG | unable to find current IP address of domain flannel-453036 in network mk-flannel-453036
	I0717 01:51:28.037531   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:28.037447   80752 retry.go:31] will retry after 1.132863115s: waiting for machine to come up
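
While the preload is being copied, the flannel-453036 machine is still booting, so its driver polls for a DHCP lease with a growing, jittered delay ("will retry after ..."). A generic sketch of that retry-with-backoff loop follows; the exact backoff policy here is illustrative, not the one retry.go uses.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
    // jittered, growing delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithBackoff(10, 300*time.Millisecond, func() error {
            calls++
            if calls < 4 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
        fmt.Println("done after", calls, "calls, err =", err)
    }
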
	I0717 01:51:28.906195   79788 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.393855113s)
	I0717 01:51:28.906231   79788 crio.go:469] duration metric: took 2.393973607s to extract the tarball
	I0717 01:51:28.906242   79788 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:51:28.945179   79788 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:51:28.989369   79788 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:51:28.989389   79788 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:51:28.989397   79788 kubeadm.go:934] updating node { 192.168.50.111 8443 v1.30.2 crio true true} ...
	I0717 01:51:28.989509   79788 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-453036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0717 01:51:28.989590   79788 ssh_runner.go:195] Run: crio config
	I0717 01:51:29.042633   79788 cni.go:84] Creating CNI manager for "bridge"
	I0717 01:51:29.042652   79788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:51:29.042673   79788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.111 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-453036 NodeName:enable-default-cni-453036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:51:29.042800   79788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-453036"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:51:29.042856   79788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:51:29.053462   79788 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:51:29.053547   79788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:51:29.063746   79788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0717 01:51:29.081575   79788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:51:29.100186   79788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0717 01:51:29.118334   79788 ssh_runner.go:195] Run: grep 192.168.50.111	control-plane.minikube.internal$ /etc/hosts
	I0717 01:51:29.122429   79788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:51:29.134586   79788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:51:29.262278   79788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:51:29.279113   79788 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036 for IP: 192.168.50.111
	I0717 01:51:29.279142   79788 certs.go:194] generating shared ca certs ...
	I0717 01:51:29.279160   79788 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:29.279342   79788 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:51:29.279396   79788 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:51:29.279410   79788 certs.go:256] generating profile certs ...
	I0717 01:51:29.279484   79788 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.key
	I0717 01:51:29.279500   79788 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt with IP's: []
	I0717 01:51:29.358176   79788 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt ...
	I0717 01:51:29.358203   79788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: {Name:mk426232fee2ecda42a0e3e69544fcc8cde02983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:29.358385   79788 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.key ...
	I0717 01:51:29.358400   79788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.key: {Name:mk8a3994a48d2b313048705d0440bb8a9b72e6ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:29.358493   79788 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/apiserver.key.857badd4
	I0717 01:51:29.358514   79788 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/apiserver.crt.857badd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.111]
	I0717 01:51:29.608486   79788 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/apiserver.crt.857badd4 ...
	I0717 01:51:29.608511   79788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/apiserver.crt.857badd4: {Name:mk4b3ab6684866c6a92f7806e769ebbeac2da98c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:29.608675   79788 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/apiserver.key.857badd4 ...
	I0717 01:51:29.608688   79788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/apiserver.key.857badd4: {Name:mk8564bd163ba21be8943abc8f491143d1a5c56e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:29.608759   79788 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/apiserver.crt.857badd4 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/apiserver.crt
	I0717 01:51:29.608826   79788 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/apiserver.key.857badd4 -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/apiserver.key
	I0717 01:51:29.608874   79788 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/proxy-client.key
	I0717 01:51:29.608893   79788 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/proxy-client.crt with IP's: []
	I0717 01:51:29.767336   79788 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/proxy-client.crt ...
	I0717 01:51:29.767362   79788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/proxy-client.crt: {Name:mk2e8f014029e599caba3799d053b42096c9d7ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:29.767548   79788 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/proxy-client.key ...
	I0717 01:51:29.767569   79788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/proxy-client.key: {Name:mk1e9913a733a27443e722bfb92d4c791a6982b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
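
The certs.go/crypto.go lines above generate per-profile certificates signed by the shared minikubeCA, with the API server certificate carrying the service IP, loopback and node IP as SANs. A compact standard-library sketch of issuing such a cert; this is not minikube's code, and the CA below is self-signed purely for the example.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newSignedCert creates a key pair and a CA-signed certificate with the given IP SANs.
    func newSignedCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, cn string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: cn},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }

    func main() {
        // throwaway self-signed CA for the example
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // SANs matching the apiserver cert in the log
        ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.111")}
        pemCert, _, err := newSignedCert(caCert, caKey, "minikube", ips)
        fmt.Println(err)
        fmt.Printf("%.64s...\n", pemCert)
    }
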
	I0717 01:51:29.767792   79788 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:51:29.767840   79788 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:51:29.767863   79788 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:51:29.767899   79788 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:51:29.767932   79788 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:51:29.767962   79788 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:51:29.768018   79788 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:51:29.768679   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:51:29.797563   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:51:29.824920   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:51:29.849995   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:51:29.878135   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 01:51:29.905173   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:51:29.931353   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:51:29.969262   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:51:30.011486   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:51:30.038692   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:51:30.063793   79788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:51:30.089031   79788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:51:30.108410   79788 ssh_runner.go:195] Run: openssl version
	I0717 01:51:30.114760   79788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:51:30.126714   79788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:51:30.131694   79788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:51:30.131746   79788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:51:30.138012   79788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:51:30.150213   79788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:51:30.162579   79788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:51:30.167274   79788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:51:30.167333   79788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:51:30.173593   79788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:51:30.187996   79788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:51:30.200235   79788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:51:30.204916   79788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:51:30.204973   79788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:51:30.210817   79788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
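
Each CA certificate placed under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and symlinked as <hash>.0 in /etc/ssl/certs so OpenSSL-based tools can find it. A sketch of that step with the paths made parameters; the helper name is mine, not minikube's.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash asks openssl for the subject hash and symlinks <hash>.0 to the cert.
    func linkCertByHash(certPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return link, nil // already linked
        }
        return link, os.Symlink(certPath, link)
    }

    func main() {
        // scratch directory so the sketch does not need root
        link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs")
        fmt.Println(link, err)
    }
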
	I0717 01:51:30.223669   79788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:51:30.228258   79788 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 01:51:30.228329   79788 kubeadm.go:392] StartCluster: {Name:enable-default-cni-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.2 ClusterName:enable-default-cni-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.111 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:30.228417   79788 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:51:30.228470   79788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:51:30.270112   79788 cri.go:89] found id: ""
	I0717 01:51:30.270199   79788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:51:30.281997   79788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:51:30.291748   79788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:51:30.301310   79788 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:51:30.301330   79788 kubeadm.go:157] found existing configuration files:
	
	I0717 01:51:30.301375   79788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:51:30.310856   79788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:51:30.310911   79788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:51:30.322428   79788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:51:30.333574   79788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:51:30.333636   79788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:51:30.344236   79788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:51:30.354091   79788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:51:30.354151   79788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:51:30.365581   79788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:51:30.375327   79788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:51:30.375375   79788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
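
Before running kubeadm init, any existing kubeconfig that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it; here none of the files exist yet, so each grep exits with status 2 and the rm is a no-op. A sketch of that cleanup loop (paths hard-coded for illustration; the real checks run over SSH with sudo):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleKubeconfigs keeps a kubeconfig only if it references the expected
    // control-plane endpoint; otherwise it is removed for regeneration.
    func cleanStaleKubeconfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil || !strings.Contains(string(data), endpoint) {
                if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
                    fmt.Println("remove failed:", rmErr)
                }
                continue
            }
            fmt.Println("keeping", p)
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
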
	I0717 01:51:30.385699   79788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:51:30.586192   79788 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:51:29.171424   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:29.172059   80566 main.go:141] libmachine: (flannel-453036) DBG | unable to find current IP address of domain flannel-453036 in network mk-flannel-453036
	I0717 01:51:29.172083   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:29.172033   80752 retry.go:31] will retry after 1.182583095s: waiting for machine to come up
	I0717 01:51:30.355907   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:30.356435   80566 main.go:141] libmachine: (flannel-453036) DBG | unable to find current IP address of domain flannel-453036 in network mk-flannel-453036
	I0717 01:51:30.356465   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:30.356384   80752 retry.go:31] will retry after 1.309168346s: waiting for machine to come up
	I0717 01:51:31.666896   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:31.667573   80566 main.go:141] libmachine: (flannel-453036) DBG | unable to find current IP address of domain flannel-453036 in network mk-flannel-453036
	I0717 01:51:31.667603   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:31.667529   80752 retry.go:31] will retry after 1.778811964s: waiting for machine to come up
	I0717 01:51:33.448236   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:33.448776   80566 main.go:141] libmachine: (flannel-453036) DBG | unable to find current IP address of domain flannel-453036 in network mk-flannel-453036
	I0717 01:51:33.448803   80566 main.go:141] libmachine: (flannel-453036) DBG | I0717 01:51:33.448731   80752 retry.go:31] will retry after 2.906546953s: waiting for machine to come up
	
	
	==> CRI-O <==
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.002325327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181097002295594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f4341e5-4977-42d0-9f28-e7ab4c6bd3dd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.002896657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f58f527c-06c9-43d1-b704-99d539a79844 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.002967586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f58f527c-06c9-43d1-b704-99d539a79844 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.003264796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721180319723301962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a81affed33178408da2f628642aa7edf3db0831f9a2ca3ccdf06466c131b6b0,PodSandboxId:0753a23b624dfebe5e28d2d417d277c4d28d267e72fb0ee392b128d4d6ae3903,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721180297512914016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c1ff7c10-e7aa-4724-afff-9ec2e8657e90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902,PodSandboxId:a080b45de4fc043a6f72102bf260287dc04b127b5dca009791f732a8921f3549,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180296604158470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-rzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb91980f-dca7-4dd0-902e-7d1ffac4e1b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721180288948147800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571,PodSandboxId:eaac9b90282922f6488de55f788e2bfdbe4c74fccc64678df73dfedf1d3bfd2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721180288904184621,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xjgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79ab1bff-5791-464d-98a0-041c53c472
34,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc,PodSandboxId:3c2fcb01cef6efaed71ddd2ad0846150979ab49b21a4e382fe48ad08b0cd370f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721180284215176238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c65e59014846c76fb9e094d3e44300,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf,PodSandboxId:5d740ec6d82b24619039e83ba0a8a4aa79061c8f59859a7b6fefe4ac00aea3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721180284216809145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdac3dcce3429ded2529e5ce29ecbb9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2,PodSandboxId:a2ca2343586d5d0bf54c3f1e2a28f5fa59c0e092e423ba272692822c1ec140bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721180284190332833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838a7a6ab42ee7a7484c41d69e5ba22c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4d
a08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e,PodSandboxId:faf56bfdc6714484aed8a106865cee9dc8bc051927831e4faf2dad898f854fdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721180284111408114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce7f8e6ea3c381a1e21f86060e22a334,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f58f527c-06c9-43d1-b704-99d539a79844 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.043703932Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b1eb46a-b04a-4116-b38a-2baa728ba5e4 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.043802887Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b1eb46a-b04a-4116-b38a-2baa728ba5e4 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.045808729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0200f252-a2d5-4e1e-9e1a-ac5aaf08fe72 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.046201640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181097046175250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0200f252-a2d5-4e1e-9e1a-ac5aaf08fe72 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.047097859Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0437e58-760a-499a-adcb-0e7a4c67eedc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.047156170Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0437e58-760a-499a-adcb-0e7a4c67eedc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.047351433Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721180319723301962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a81affed33178408da2f628642aa7edf3db0831f9a2ca3ccdf06466c131b6b0,PodSandboxId:0753a23b624dfebe5e28d2d417d277c4d28d267e72fb0ee392b128d4d6ae3903,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721180297512914016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c1ff7c10-e7aa-4724-afff-9ec2e8657e90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902,PodSandboxId:a080b45de4fc043a6f72102bf260287dc04b127b5dca009791f732a8921f3549,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180296604158470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-rzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb91980f-dca7-4dd0-902e-7d1ffac4e1b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721180288948147800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571,PodSandboxId:eaac9b90282922f6488de55f788e2bfdbe4c74fccc64678df73dfedf1d3bfd2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721180288904184621,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xjgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79ab1bff-5791-464d-98a0-041c53c472
34,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc,PodSandboxId:3c2fcb01cef6efaed71ddd2ad0846150979ab49b21a4e382fe48ad08b0cd370f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721180284215176238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c65e59014846c76fb9e094d3e44300,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf,PodSandboxId:5d740ec6d82b24619039e83ba0a8a4aa79061c8f59859a7b6fefe4ac00aea3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721180284216809145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdac3dcce3429ded2529e5ce29ecbb9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2,PodSandboxId:a2ca2343586d5d0bf54c3f1e2a28f5fa59c0e092e423ba272692822c1ec140bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721180284190332833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838a7a6ab42ee7a7484c41d69e5ba22c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4d
a08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e,PodSandboxId:faf56bfdc6714484aed8a106865cee9dc8bc051927831e4faf2dad898f854fdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721180284111408114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce7f8e6ea3c381a1e21f86060e22a334,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0437e58-760a-499a-adcb-0e7a4c67eedc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.092682463Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dad62537-a27b-43e9-b14d-f0c97cebcf83 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.092754356Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dad62537-a27b-43e9-b14d-f0c97cebcf83 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.094840783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ba1eec7-ae9f-42aa-8962-b9f9748c44e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.095385748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181097095358307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ba1eec7-ae9f-42aa-8962-b9f9748c44e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.096038866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a1c2bb7-8df2-47f2-b33f-3fd9b08de3a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.096115691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a1c2bb7-8df2-47f2-b33f-3fd9b08de3a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.096302159Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721180319723301962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a81affed33178408da2f628642aa7edf3db0831f9a2ca3ccdf06466c131b6b0,PodSandboxId:0753a23b624dfebe5e28d2d417d277c4d28d267e72fb0ee392b128d4d6ae3903,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721180297512914016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c1ff7c10-e7aa-4724-afff-9ec2e8657e90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902,PodSandboxId:a080b45de4fc043a6f72102bf260287dc04b127b5dca009791f732a8921f3549,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180296604158470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-rzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb91980f-dca7-4dd0-902e-7d1ffac4e1b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721180288948147800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571,PodSandboxId:eaac9b90282922f6488de55f788e2bfdbe4c74fccc64678df73dfedf1d3bfd2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721180288904184621,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xjgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79ab1bff-5791-464d-98a0-041c53c472
34,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc,PodSandboxId:3c2fcb01cef6efaed71ddd2ad0846150979ab49b21a4e382fe48ad08b0cd370f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721180284215176238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c65e59014846c76fb9e094d3e44300,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf,PodSandboxId:5d740ec6d82b24619039e83ba0a8a4aa79061c8f59859a7b6fefe4ac00aea3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721180284216809145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdac3dcce3429ded2529e5ce29ecbb9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2,PodSandboxId:a2ca2343586d5d0bf54c3f1e2a28f5fa59c0e092e423ba272692822c1ec140bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721180284190332833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838a7a6ab42ee7a7484c41d69e5ba22c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4d
a08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e,PodSandboxId:faf56bfdc6714484aed8a106865cee9dc8bc051927831e4faf2dad898f854fdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721180284111408114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce7f8e6ea3c381a1e21f86060e22a334,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a1c2bb7-8df2-47f2-b33f-3fd9b08de3a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.132699868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1dc2af4-3fd6-4de6-a2d9-ca40d7f8c64f name=/runtime.v1.RuntimeService/Version
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.132821946Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1dc2af4-3fd6-4de6-a2d9-ca40d7f8c64f name=/runtime.v1.RuntimeService/Version
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.134508294Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7354284-f0e0-49b4-a8f3-84633d874abc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.135075442Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181097135043506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7354284-f0e0-49b4-a8f3-84633d874abc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.135714469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f504de87-3a68-4076-aa28-1ed04bb96a0e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.135805018Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f504de87-3a68-4076-aa28-1ed04bb96a0e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:51:37 no-preload-818382 crio[729]: time="2024-07-17 01:51:37.136193973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721180319723301962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a81affed33178408da2f628642aa7edf3db0831f9a2ca3ccdf06466c131b6b0,PodSandboxId:0753a23b624dfebe5e28d2d417d277c4d28d267e72fb0ee392b128d4d6ae3903,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721180297512914016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c1ff7c10-e7aa-4724-afff-9ec2e8657e90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902,PodSandboxId:a080b45de4fc043a6f72102bf260287dc04b127b5dca009791f732a8921f3549,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180296604158470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-rzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb91980f-dca7-4dd0-902e-7d1ffac4e1b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721180288948147800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571,PodSandboxId:eaac9b90282922f6488de55f788e2bfdbe4c74fccc64678df73dfedf1d3bfd2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721180288904184621,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xjgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79ab1bff-5791-464d-98a0-041c53c472
34,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc,PodSandboxId:3c2fcb01cef6efaed71ddd2ad0846150979ab49b21a4e382fe48ad08b0cd370f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721180284215176238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c65e59014846c76fb9e094d3e44300,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf,PodSandboxId:5d740ec6d82b24619039e83ba0a8a4aa79061c8f59859a7b6fefe4ac00aea3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721180284216809145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdac3dcce3429ded2529e5ce29ecbb9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2,PodSandboxId:a2ca2343586d5d0bf54c3f1e2a28f5fa59c0e092e423ba272692822c1ec140bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721180284190332833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838a7a6ab42ee7a7484c41d69e5ba22c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4d
a08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e,PodSandboxId:faf56bfdc6714484aed8a106865cee9dc8bc051927831e4faf2dad898f854fdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721180284111408114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce7f8e6ea3c381a1e21f86060e22a334,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f504de87-3a68-4076-aa28-1ed04bb96a0e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	da9966ff36be8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   9dfeec5263456       storage-provisioner
	0a81affed3317       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   0753a23b624df       busybox
	e8dda478edb70       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   a080b45de4fc0       coredns-5cfdc65f69-rzhfk
	b36943f541e1b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   9dfeec5263456       storage-provisioner
	98b3c4a1f8778       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      13 minutes ago      Running             kube-proxy                1                   eaac9b9028292       kube-proxy-7xjgl
	0e68107fbc903       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      13 minutes ago      Running             etcd                      1                   5d740ec6d82b2       etcd-no-preload-818382
	b7e8dfc9eddb7       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      13 minutes ago      Running             kube-scheduler            1                   3c2fcb01cef6e       kube-scheduler-no-preload-818382
	8b3944e69af1a       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      13 minutes ago      Running             kube-apiserver            1                   a2ca2343586d5       kube-apiserver-no-preload-818382
	7a78373ef3f84       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      13 minutes ago      Running             kube-controller-manager   1                   faf56bfdc6714       kube-controller-manager-no-preload-818382
	
	
	==> coredns [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47653 - 208 "HINFO IN 3214131708330472645.7751523909791762612. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.088966684s
	
	
	==> describe nodes <==
	Name:               no-preload-818382
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-818382
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=no-preload-818382
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_29_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:29:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-818382
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:51:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:48:50 +0000   Wed, 17 Jul 2024 01:29:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:48:50 +0000   Wed, 17 Jul 2024 01:29:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:48:50 +0000   Wed, 17 Jul 2024 01:29:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:48:50 +0000   Wed, 17 Jul 2024 01:38:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    no-preload-818382
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fdd83e880e04146b6b0130198304011
	  System UUID:                1fdd83e8-80e0-4146-b6b0-130198304011
	  Boot ID:                    14bdd5e4-b055-48d3-aff1-025d69cecc8a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         21m
	  kube-system                 coredns-5cfdc65f69-rzhfk                     100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     21m
	  kube-system                 etcd-no-preload-818382                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         21m
	  kube-system                 kube-apiserver-no-preload-818382             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         21m
	  kube-system                 kube-controller-manager-no-preload-818382    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         21m
	  kube-system                 kube-proxy-7xjgl                             0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         21m
	  kube-system                 kube-scheduler-no-preload-818382             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         21m
	  kube-system                 metrics-server-78fcd8795b-vgkwg              100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (9%!)(MISSING)       0 (0%!)(MISSING)         21m
	  kube-system                 storage-provisioner                          0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   0 (0%!)(MISSING)
	  memory             370Mi (17%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-818382 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-818382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-818382 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node no-preload-818382 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-818382 event: Registered Node no-preload-818382 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-818382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-818382 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-818382 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-818382 event: Registered Node no-preload-818382 in Controller
	
	
	==> dmesg <==
	[Jul17 01:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050139] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040267] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.568491] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.394781] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.592043] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.617968] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.055873] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059040] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.189107] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.117532] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.270674] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[Jul17 01:38] systemd-fstab-generator[1183]: Ignoring "noauto" option for root device
	[  +0.059678] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.054541] systemd-fstab-generator[1304]: Ignoring "noauto" option for root device
	[  +4.097065] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.492435] systemd-fstab-generator[1934]: Ignoring "noauto" option for root device
	[  +1.544137] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.249732] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> etcd [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf] <==
	{"level":"info","ts":"2024-07-17T01:38:06.241579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgVoteResp from 38b26e584d45e0da at term 3"}
	{"level":"info","ts":"2024-07-17T01:38:06.241588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became leader at term 3"}
	{"level":"info","ts":"2024-07-17T01:38:06.241595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 38b26e584d45e0da elected leader 38b26e584d45e0da at term 3"}
	{"level":"info","ts":"2024-07-17T01:38:06.257227Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"38b26e584d45e0da","local-member-attributes":"{Name:no-preload-818382 ClientURLs:[https://192.168.39.38:2379]}","request-path":"/0/members/38b26e584d45e0da/attributes","cluster-id":"afb1a6a08b4dab74","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:38:06.257247Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:38:06.257429Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:38:06.257806Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:38:06.257868Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:38:06.258741Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T01:38:06.258747Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T01:38:06.259726Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.38:2379"}
	{"level":"info","ts":"2024-07-17T01:38:06.259824Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-17T01:45:00.411262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"368.421947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:45:00.412191Z","caller":"traceutil/trace.go:171","msg":"trace[1962081311] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:915; }","duration":"369.380681ms","start":"2024-07-17T01:45:00.042743Z","end":"2024-07-17T01:45:00.412123Z","steps":["trace[1962081311] 'range keys from in-memory index tree'  (duration: 368.299375ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:45:00.412336Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:45:00.04271Z","time spent":"369.597603ms","remote":"127.0.0.1:53366","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-07-17T01:45:46.947818Z","caller":"traceutil/trace.go:171","msg":"trace[1415028627] transaction","detail":"{read_only:false; response_revision:952; number_of_response:1; }","duration":"110.71159ms","start":"2024-07-17T01:45:46.83678Z","end":"2024-07-17T01:45:46.947491Z","steps":["trace[1415028627] 'process raft request'  (duration: 110.548931ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:48:06.284892Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":820}
	{"level":"info","ts":"2024-07-17T01:48:06.295074Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":820,"took":"9.837231ms","hash":1763417678,"current-db-size-bytes":2359296,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2359296,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-17T01:48:06.295162Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1763417678,"revision":820,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T01:49:38.61617Z","caller":"traceutil/trace.go:171","msg":"trace[2063164074] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"106.237509ms","start":"2024-07-17T01:49:38.509883Z","end":"2024-07-17T01:49:38.61612Z","steps":["trace[2063164074] 'process raft request'  (duration: 106.116572ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:49:39.154591Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.7486ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16202421756594917764 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.38\" mod_revision:1131 > success:<request_put:<key:\"/registry/masterleases/192.168.39.38\" value_size:66 lease:6979049719740141953 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.38\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T01:49:39.154695Z","caller":"traceutil/trace.go:171","msg":"trace[717214407] transaction","detail":"{read_only:false; response_revision:1140; number_of_response:1; }","duration":"119.33112ms","start":"2024-07-17T01:49:39.035351Z","end":"2024-07-17T01:49:39.154682Z","steps":["trace[717214407] 'process raft request'  (duration: 10.293038ms)","trace[717214407] 'compare'  (duration: 107.645022ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T01:50:39.113829Z","caller":"traceutil/trace.go:171","msg":"trace[701685871] linearizableReadLoop","detail":"{readStateIndex:1375; appliedIndex:1374; }","duration":"104.479596ms","start":"2024-07-17T01:50:39.00931Z","end":"2024-07-17T01:50:39.113789Z","steps":["trace[701685871] 'read index received'  (duration: 104.27682ms)","trace[701685871] 'applied index is now lower than readState.Index'  (duration: 201.751µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:50:39.1141Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.736394ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:50:39.114135Z","caller":"traceutil/trace.go:171","msg":"trace[1589320175] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1187; }","duration":"104.820899ms","start":"2024-07-17T01:50:39.009305Z","end":"2024-07-17T01:50:39.114125Z","steps":["trace[1589320175] 'agreement among raft nodes before linearized reading'  (duration: 104.708165ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:51:37 up 14 min,  0 users,  load average: 0.38, 0.35, 0.22
	Linux no-preload-818382 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2] <==
	W0717 01:48:08.671276       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 01:48:08.671351       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 01:48:08.672410       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 01:48:08.672422       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:49:08.673598       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 01:49:08.673774       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0717 01:49:08.673914       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 01:49:08.673967       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0717 01:49:08.675021       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 01:49:08.675044       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:51:08.675580       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 01:51:08.675690       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0717 01:51:08.675946       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 01:51:08.676113       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 01:51:08.676891       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 01:51:08.678073       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e] <==
	E0717 01:46:12.298967       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:46:12.364355       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:46:42.307766       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:46:42.378957       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:47:12.316156       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:47:12.386958       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:47:42.324425       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:47:42.396459       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:48:12.331611       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:48:12.404158       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:48:42.338506       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:48:42.414928       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 01:48:50.038202       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-818382"
	E0717 01:49:12.345290       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:49:12.427226       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 01:49:23.521935       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="807.228µs"
	I0717 01:49:38.619443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="131.966µs"
	E0717 01:49:42.351246       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:49:42.435615       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:50:12.357629       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:50:12.445336       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:50:42.365288       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:50:42.462118       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:51:12.373707       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:51:12.477318       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 01:38:09.244386       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0717 01:38:09.263258       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.38"]
	E0717 01:38:09.263493       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0717 01:38:09.342217       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0717 01:38:09.342297       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:38:09.342351       1 server_linux.go:170] "Using iptables Proxier"
	I0717 01:38:09.345127       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0717 01:38:09.345474       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0717 01:38:09.345498       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:38:09.347087       1 config.go:197] "Starting service config controller"
	I0717 01:38:09.347126       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:38:09.347147       1 config.go:104] "Starting endpoint slice config controller"
	I0717 01:38:09.347152       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:38:09.348364       1 config.go:326] "Starting node config controller"
	I0717 01:38:09.348394       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:38:09.448271       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:38:09.448622       1 shared_informer.go:320] Caches are synced for node config
	I0717 01:38:09.448337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc] <==
	I0717 01:38:05.445112       1 serving.go:386] Generated self-signed cert in-memory
	W0717 01:38:07.605727       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:38:07.605926       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:38:07.605959       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:38:07.606036       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:38:07.667669       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0717 01:38:07.667715       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:38:07.671434       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:38:07.671606       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:38:07.671645       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:38:07.671669       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0717 01:38:07.771947       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:49:03 no-preload-818382 kubelet[1311]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:49:11 no-preload-818382 kubelet[1311]: E0717 01:49:11.523937    1311 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 01:49:11 no-preload-818382 kubelet[1311]: E0717 01:49:11.524097    1311 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 01:49:11 no-preload-818382 kubelet[1311]: E0717 01:49:11.524382    1311 kuberuntime_manager.go:1257] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vbrfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-78fcd8795b-vgkwg_kube-system(6386b732-76a6-4744-9215-e4764e08e4e5): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jul 17 01:49:11 no-preload-818382 kubelet[1311]: E0717 01:49:11.525901    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:49:23 no-preload-818382 kubelet[1311]: E0717 01:49:23.502027    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:49:38 no-preload-818382 kubelet[1311]: E0717 01:49:38.501915    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:49:53 no-preload-818382 kubelet[1311]: E0717 01:49:53.502991    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:50:03 no-preload-818382 kubelet[1311]: E0717 01:50:03.519063    1311 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:50:03 no-preload-818382 kubelet[1311]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:50:03 no-preload-818382 kubelet[1311]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:50:03 no-preload-818382 kubelet[1311]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:50:03 no-preload-818382 kubelet[1311]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:50:04 no-preload-818382 kubelet[1311]: E0717 01:50:04.502088    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:50:19 no-preload-818382 kubelet[1311]: E0717 01:50:19.503419    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:50:30 no-preload-818382 kubelet[1311]: E0717 01:50:30.502907    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:50:45 no-preload-818382 kubelet[1311]: E0717 01:50:45.503931    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:51:00 no-preload-818382 kubelet[1311]: E0717 01:51:00.502261    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:51:03 no-preload-818382 kubelet[1311]: E0717 01:51:03.521062    1311 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:51:03 no-preload-818382 kubelet[1311]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:51:03 no-preload-818382 kubelet[1311]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:51:03 no-preload-818382 kubelet[1311]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:51:03 no-preload-818382 kubelet[1311]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:51:11 no-preload-818382 kubelet[1311]: E0717 01:51:11.502887    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:51:25 no-preload-818382 kubelet[1311]: E0717 01:51:25.505223    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	
	
	==> storage-provisioner [b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a] <==
	I0717 01:38:09.072750       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 01:38:39.076796       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461] <==
	I0717 01:38:39.821368       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:38:39.836320       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:38:39.836899       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:38:39.855369       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:38:39.855626       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-818382_f90f8659-f815-4e1a-8695-25afb52db782!
	I0717 01:38:39.866232       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d37da931-3b24-4588-9d82-4654a10d779a", APIVersion:"v1", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-818382_f90f8659-f815-4e1a-8695-25afb52db782 became leader
	I0717 01:38:39.956760       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-818382_f90f8659-f815-4e1a-8695-25afb52db782!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-818382 -n no-preload-818382
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-818382 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-vgkwg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-818382 describe pod metrics-server-78fcd8795b-vgkwg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-818382 describe pod metrics-server-78fcd8795b-vgkwg: exit status 1 (75.934055ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-vgkwg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-818382 describe pod metrics-server-78fcd8795b-vgkwg: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.43s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (436.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-484167 -n embed-certs-484167
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-17 01:49:53.50063639 +0000 UTC m=+6324.304782223
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-484167 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-484167 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.825µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-484167 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-484167 -n embed-certs-484167
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-484167 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-484167 logs -n 25: (1.195383011s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo cat                           | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo cat                           | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo cat                           | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo docker                        | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo cat                           | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo cat                           | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo cat                           | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo cat                           | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo find                          | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo crio                          | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p kindnet-453036                                    | kindnet-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	| start   | -p calico-453036 --memory=3072                       | calico-453036  | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:49:08
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:49:08.772023   77834 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:49:08.772296   77834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:49:08.772306   77834 out.go:304] Setting ErrFile to fd 2...
	I0717 01:49:08.772312   77834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:49:08.772484   77834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:49:08.773105   77834 out.go:298] Setting JSON to false
	I0717 01:49:08.774192   77834 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9098,"bootTime":1721171851,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:49:08.774257   77834 start.go:139] virtualization: kvm guest
	I0717 01:49:08.776392   77834 out.go:177] * [calico-453036] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:49:08.777876   77834 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:49:08.777892   77834 notify.go:220] Checking for updates...
	I0717 01:49:08.780650   77834 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:49:08.782033   77834 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:49:08.783386   77834 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:49:08.784729   77834 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:49:08.785904   77834 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:49:08.787794   77834 config.go:182] Loaded profile config "default-k8s-diff-port-945694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:49:08.787943   77834 config.go:182] Loaded profile config "embed-certs-484167": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:49:08.788047   77834 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:49:08.788133   77834 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:49:08.825064   77834 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 01:49:08.826306   77834 start.go:297] selected driver: kvm2
	I0717 01:49:08.826332   77834 start.go:901] validating driver "kvm2" against <nil>
	I0717 01:49:08.826345   77834 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:49:08.827278   77834 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:49:08.827376   77834 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:49:08.843686   77834 install.go:137] /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:49:08.843725   77834 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 01:49:08.843963   77834 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:49:08.843999   77834 cni.go:84] Creating CNI manager for "calico"
	I0717 01:49:08.844008   77834 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0717 01:49:08.844062   77834 start.go:340] cluster config:
	{Name:calico-453036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:49:08.844176   77834 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:49:08.845977   77834 out.go:177] * Starting "calico-453036" primary control-plane node in "calico-453036" cluster
	I0717 01:49:08.847134   77834 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:49:08.847170   77834 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 01:49:08.847182   77834 cache.go:56] Caching tarball of preloaded images
	I0717 01:49:08.847255   77834 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:49:08.847267   77834 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 01:49:08.847367   77834 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/config.json ...
	I0717 01:49:08.847388   77834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/config.json: {Name:mkc8591636b367f7f58b32282a82a46ed80d4184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:49:08.847528   77834 start.go:360] acquireMachinesLock for calico-453036: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:49:08.847567   77834 start.go:364] duration metric: took 23.744µs to acquireMachinesLock for "calico-453036"
	I0717 01:49:08.847589   77834 start.go:93] Provisioning new machine with config: &{Name:calico-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:49:08.847671   77834 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 01:49:08.849148   77834 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 01:49:08.849295   77834 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:49:08.849336   77834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:49:08.864710   77834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I0717 01:49:08.865129   77834 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:49:08.865627   77834 main.go:141] libmachine: Using API Version  1
	I0717 01:49:08.865648   77834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:49:08.866013   77834 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:49:08.866180   77834 main.go:141] libmachine: (calico-453036) Calling .GetMachineName
	I0717 01:49:08.866325   77834 main.go:141] libmachine: (calico-453036) Calling .DriverName
	I0717 01:49:08.866455   77834 start.go:159] libmachine.API.Create for "calico-453036" (driver="kvm2")
	I0717 01:49:08.866484   77834 client.go:168] LocalClient.Create starting
	I0717 01:49:08.866517   77834 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 01:49:08.866555   77834 main.go:141] libmachine: Decoding PEM data...
	I0717 01:49:08.866575   77834 main.go:141] libmachine: Parsing certificate...
	I0717 01:49:08.866651   77834 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 01:49:08.866677   77834 main.go:141] libmachine: Decoding PEM data...
	I0717 01:49:08.866691   77834 main.go:141] libmachine: Parsing certificate...
	I0717 01:49:08.866717   77834 main.go:141] libmachine: Running pre-create checks...
	I0717 01:49:08.866725   77834 main.go:141] libmachine: (calico-453036) Calling .PreCreateCheck
	I0717 01:49:08.867141   77834 main.go:141] libmachine: (calico-453036) Calling .GetConfigRaw
	I0717 01:49:08.867525   77834 main.go:141] libmachine: Creating machine...
	I0717 01:49:08.867542   77834 main.go:141] libmachine: (calico-453036) Calling .Create
	I0717 01:49:08.867664   77834 main.go:141] libmachine: (calico-453036) Creating KVM machine...
	I0717 01:49:08.868996   77834 main.go:141] libmachine: (calico-453036) DBG | found existing default KVM network
	I0717 01:49:08.870352   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:08.870205   77857 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:26:93} reservation:<nil>}
	I0717 01:49:08.871107   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:08.871023   77857 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:64:6c:f9} reservation:<nil>}
	I0717 01:49:08.872179   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:08.872115   77857 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a5150}
	I0717 01:49:08.872201   77834 main.go:141] libmachine: (calico-453036) DBG | created network xml: 
	I0717 01:49:08.872207   77834 main.go:141] libmachine: (calico-453036) DBG | <network>
	I0717 01:49:08.872214   77834 main.go:141] libmachine: (calico-453036) DBG |   <name>mk-calico-453036</name>
	I0717 01:49:08.872219   77834 main.go:141] libmachine: (calico-453036) DBG |   <dns enable='no'/>
	I0717 01:49:08.872224   77834 main.go:141] libmachine: (calico-453036) DBG |   
	I0717 01:49:08.872229   77834 main.go:141] libmachine: (calico-453036) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0717 01:49:08.872238   77834 main.go:141] libmachine: (calico-453036) DBG |     <dhcp>
	I0717 01:49:08.872244   77834 main.go:141] libmachine: (calico-453036) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0717 01:49:08.872252   77834 main.go:141] libmachine: (calico-453036) DBG |     </dhcp>
	I0717 01:49:08.872256   77834 main.go:141] libmachine: (calico-453036) DBG |   </ip>
	I0717 01:49:08.872263   77834 main.go:141] libmachine: (calico-453036) DBG |   
	I0717 01:49:08.872276   77834 main.go:141] libmachine: (calico-453036) DBG | </network>
	I0717 01:49:08.872286   77834 main.go:141] libmachine: (calico-453036) DBG | 
	I0717 01:49:08.877729   77834 main.go:141] libmachine: (calico-453036) DBG | trying to create private KVM network mk-calico-453036 192.168.61.0/24...
	I0717 01:49:08.949644   77834 main.go:141] libmachine: (calico-453036) DBG | private KVM network mk-calico-453036 192.168.61.0/24 created
	I0717 01:49:08.949679   77834 main.go:141] libmachine: (calico-453036) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036 ...
	I0717 01:49:08.949716   77834 main.go:141] libmachine: (calico-453036) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 01:49:08.949729   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:08.949602   77857 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:49:08.949889   77834 main.go:141] libmachine: (calico-453036) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 01:49:09.178974   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:09.178853   77857 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036/id_rsa...
	I0717 01:49:09.281398   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:09.281297   77857 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036/calico-453036.rawdisk...
	I0717 01:49:09.281432   77834 main.go:141] libmachine: (calico-453036) DBG | Writing magic tar header
	I0717 01:49:09.281443   77834 main.go:141] libmachine: (calico-453036) DBG | Writing SSH key tar header
	I0717 01:49:09.281507   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:09.281436   77857 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036 ...
	I0717 01:49:09.281609   77834 main.go:141] libmachine: (calico-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036
	I0717 01:49:09.281634   77834 main.go:141] libmachine: (calico-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036 (perms=drwx------)
	I0717 01:49:09.281643   77834 main.go:141] libmachine: (calico-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 01:49:09.281655   77834 main.go:141] libmachine: (calico-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:49:09.281662   77834 main.go:141] libmachine: (calico-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 01:49:09.281669   77834 main.go:141] libmachine: (calico-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 01:49:09.281689   77834 main.go:141] libmachine: (calico-453036) DBG | Checking permissions on dir: /home/jenkins
	I0717 01:49:09.281702   77834 main.go:141] libmachine: (calico-453036) DBG | Checking permissions on dir: /home
	I0717 01:49:09.281720   77834 main.go:141] libmachine: (calico-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 01:49:09.281731   77834 main.go:141] libmachine: (calico-453036) DBG | Skipping /home - not owner
	I0717 01:49:09.281748   77834 main.go:141] libmachine: (calico-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 01:49:09.281761   77834 main.go:141] libmachine: (calico-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 01:49:09.281773   77834 main.go:141] libmachine: (calico-453036) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 01:49:09.281786   77834 main.go:141] libmachine: (calico-453036) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 01:49:09.281816   77834 main.go:141] libmachine: (calico-453036) Creating domain...
	I0717 01:49:09.282969   77834 main.go:141] libmachine: (calico-453036) define libvirt domain using xml: 
	I0717 01:49:09.282991   77834 main.go:141] libmachine: (calico-453036) <domain type='kvm'>
	I0717 01:49:09.283007   77834 main.go:141] libmachine: (calico-453036)   <name>calico-453036</name>
	I0717 01:49:09.283015   77834 main.go:141] libmachine: (calico-453036)   <memory unit='MiB'>3072</memory>
	I0717 01:49:09.283048   77834 main.go:141] libmachine: (calico-453036)   <vcpu>2</vcpu>
	I0717 01:49:09.283072   77834 main.go:141] libmachine: (calico-453036)   <features>
	I0717 01:49:09.283086   77834 main.go:141] libmachine: (calico-453036)     <acpi/>
	I0717 01:49:09.283111   77834 main.go:141] libmachine: (calico-453036)     <apic/>
	I0717 01:49:09.283125   77834 main.go:141] libmachine: (calico-453036)     <pae/>
	I0717 01:49:09.283135   77834 main.go:141] libmachine: (calico-453036)     
	I0717 01:49:09.283144   77834 main.go:141] libmachine: (calico-453036)   </features>
	I0717 01:49:09.283168   77834 main.go:141] libmachine: (calico-453036)   <cpu mode='host-passthrough'>
	I0717 01:49:09.283180   77834 main.go:141] libmachine: (calico-453036)   
	I0717 01:49:09.283186   77834 main.go:141] libmachine: (calico-453036)   </cpu>
	I0717 01:49:09.283193   77834 main.go:141] libmachine: (calico-453036)   <os>
	I0717 01:49:09.283201   77834 main.go:141] libmachine: (calico-453036)     <type>hvm</type>
	I0717 01:49:09.283207   77834 main.go:141] libmachine: (calico-453036)     <boot dev='cdrom'/>
	I0717 01:49:09.283213   77834 main.go:141] libmachine: (calico-453036)     <boot dev='hd'/>
	I0717 01:49:09.283217   77834 main.go:141] libmachine: (calico-453036)     <bootmenu enable='no'/>
	I0717 01:49:09.283223   77834 main.go:141] libmachine: (calico-453036)   </os>
	I0717 01:49:09.283233   77834 main.go:141] libmachine: (calico-453036)   <devices>
	I0717 01:49:09.283242   77834 main.go:141] libmachine: (calico-453036)     <disk type='file' device='cdrom'>
	I0717 01:49:09.283274   77834 main.go:141] libmachine: (calico-453036)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036/boot2docker.iso'/>
	I0717 01:49:09.283287   77834 main.go:141] libmachine: (calico-453036)       <target dev='hdc' bus='scsi'/>
	I0717 01:49:09.283291   77834 main.go:141] libmachine: (calico-453036)       <readonly/>
	I0717 01:49:09.283301   77834 main.go:141] libmachine: (calico-453036)     </disk>
	I0717 01:49:09.283308   77834 main.go:141] libmachine: (calico-453036)     <disk type='file' device='disk'>
	I0717 01:49:09.283316   77834 main.go:141] libmachine: (calico-453036)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 01:49:09.283329   77834 main.go:141] libmachine: (calico-453036)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036/calico-453036.rawdisk'/>
	I0717 01:49:09.283352   77834 main.go:141] libmachine: (calico-453036)       <target dev='hda' bus='virtio'/>
	I0717 01:49:09.283370   77834 main.go:141] libmachine: (calico-453036)     </disk>
	I0717 01:49:09.283381   77834 main.go:141] libmachine: (calico-453036)     <interface type='network'>
	I0717 01:49:09.283393   77834 main.go:141] libmachine: (calico-453036)       <source network='mk-calico-453036'/>
	I0717 01:49:09.283406   77834 main.go:141] libmachine: (calico-453036)       <model type='virtio'/>
	I0717 01:49:09.283415   77834 main.go:141] libmachine: (calico-453036)     </interface>
	I0717 01:49:09.283427   77834 main.go:141] libmachine: (calico-453036)     <interface type='network'>
	I0717 01:49:09.283437   77834 main.go:141] libmachine: (calico-453036)       <source network='default'/>
	I0717 01:49:09.283446   77834 main.go:141] libmachine: (calico-453036)       <model type='virtio'/>
	I0717 01:49:09.283457   77834 main.go:141] libmachine: (calico-453036)     </interface>
	I0717 01:49:09.283469   77834 main.go:141] libmachine: (calico-453036)     <serial type='pty'>
	I0717 01:49:09.283480   77834 main.go:141] libmachine: (calico-453036)       <target port='0'/>
	I0717 01:49:09.283492   77834 main.go:141] libmachine: (calico-453036)     </serial>
	I0717 01:49:09.283504   77834 main.go:141] libmachine: (calico-453036)     <console type='pty'>
	I0717 01:49:09.283517   77834 main.go:141] libmachine: (calico-453036)       <target type='serial' port='0'/>
	I0717 01:49:09.283527   77834 main.go:141] libmachine: (calico-453036)     </console>
	I0717 01:49:09.283536   77834 main.go:141] libmachine: (calico-453036)     <rng model='virtio'>
	I0717 01:49:09.283548   77834 main.go:141] libmachine: (calico-453036)       <backend model='random'>/dev/random</backend>
	I0717 01:49:09.283560   77834 main.go:141] libmachine: (calico-453036)     </rng>
	I0717 01:49:09.283574   77834 main.go:141] libmachine: (calico-453036)     
	I0717 01:49:09.283585   77834 main.go:141] libmachine: (calico-453036)     
	I0717 01:49:09.283594   77834 main.go:141] libmachine: (calico-453036)   </devices>
	I0717 01:49:09.283604   77834 main.go:141] libmachine: (calico-453036) </domain>
	I0717 01:49:09.283613   77834 main.go:141] libmachine: (calico-453036) 
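The <domain> definition logged above is ordinary libvirt XML, so the same step can be reproduced outside of minikube. A minimal sketch in Go, assuming virsh is on PATH and using placeholder disk paths rather than the .minikube store shown in the log (this is not minikube's actual template code):

	package main

	import (
		"log"
		"os"
		"os/exec"
		"text/template"
	)

	// Abbreviated version of the <domain> XML logged above; only the fields
	// that vary per profile are templated.
	const domainXML = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	  <devices>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw'/>
	      <source file='{{.Disk}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>`

	type domainSpec struct {
		Name          string
		MemMiB, CPUs  int
		Disk, Network string
	}

	func main() {
		spec := domainSpec{Name: "calico-453036", MemMiB: 3072, CPUs: 2,
			Disk: "/tmp/calico-453036.rawdisk", Network: "mk-calico-453036"} // placeholder disk path

		f, err := os.CreateTemp("", "domain-*.xml")
		if err != nil {
			log.Fatal(err)
		}
		defer os.Remove(f.Name())
		if err := template.Must(template.New("dom").Parse(domainXML)).Execute(f, spec); err != nil {
			log.Fatal(err)
		}
		f.Close()

		// "virsh define" registers the domain without starting it, which is the
		// step the log calls "define libvirt domain using xml".
		if out, err := exec.Command("virsh", "define", f.Name()).CombinedOutput(); err != nil {
			log.Fatalf("virsh define failed: %v\n%s", err, out)
		}
	}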
	I0717 01:49:09.287575   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:a1:fc:93 in network default
	I0717 01:49:09.288204   77834 main.go:141] libmachine: (calico-453036) Ensuring networks are active...
	I0717 01:49:09.288223   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:09.288880   77834 main.go:141] libmachine: (calico-453036) Ensuring network default is active
	I0717 01:49:09.289243   77834 main.go:141] libmachine: (calico-453036) Ensuring network mk-calico-453036 is active
	I0717 01:49:09.289819   77834 main.go:141] libmachine: (calico-453036) Getting domain xml...
	I0717 01:49:09.290513   77834 main.go:141] libmachine: (calico-453036) Creating domain...
	I0717 01:49:10.537418   77834 main.go:141] libmachine: (calico-453036) Waiting to get IP...
	I0717 01:49:10.538181   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:10.538593   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find current IP address of domain calico-453036 in network mk-calico-453036
	I0717 01:49:10.538635   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:10.538568   77857 retry.go:31] will retry after 197.510168ms: waiting for machine to come up
	I0717 01:49:10.738027   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:10.738558   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find current IP address of domain calico-453036 in network mk-calico-453036
	I0717 01:49:10.738584   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:10.738516   77857 retry.go:31] will retry after 283.432664ms: waiting for machine to come up
	I0717 01:49:11.024053   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:11.024591   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find current IP address of domain calico-453036 in network mk-calico-453036
	I0717 01:49:11.024617   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:11.024541   77857 retry.go:31] will retry after 400.707318ms: waiting for machine to come up
	I0717 01:49:11.427001   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:11.427517   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find current IP address of domain calico-453036 in network mk-calico-453036
	I0717 01:49:11.427564   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:11.427473   77857 retry.go:31] will retry after 594.042715ms: waiting for machine to come up
	I0717 01:49:12.023150   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:12.023757   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find current IP address of domain calico-453036 in network mk-calico-453036
	I0717 01:49:12.023785   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:12.023695   77857 retry.go:31] will retry after 623.143245ms: waiting for machine to come up
	I0717 01:49:12.648572   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:12.649085   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find current IP address of domain calico-453036 in network mk-calico-453036
	I0717 01:49:12.649111   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:12.649048   77857 retry.go:31] will retry after 750.057016ms: waiting for machine to come up
	I0717 01:49:13.400451   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:13.400883   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find current IP address of domain calico-453036 in network mk-calico-453036
	I0717 01:49:13.400911   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:13.400834   77857 retry.go:31] will retry after 1.00254104s: waiting for machine to come up
	I0717 01:49:14.405260   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:14.405696   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find current IP address of domain calico-453036 in network mk-calico-453036
	I0717 01:49:14.405720   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:14.405675   77857 retry.go:31] will retry after 1.098768504s: waiting for machine to come up
	I0717 01:49:15.505592   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:15.506140   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find current IP address of domain calico-453036 in network mk-calico-453036
	I0717 01:49:15.506168   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:15.506089   77857 retry.go:31] will retry after 1.744038502s: waiting for machine to come up
	I0717 01:49:17.251361   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:17.251926   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find current IP address of domain calico-453036 in network mk-calico-453036
	I0717 01:49:17.251957   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:17.251861   77857 retry.go:31] will retry after 2.135744994s: waiting for machine to come up
	I0717 01:49:19.389772   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:19.390309   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find current IP address of domain calico-453036 in network mk-calico-453036
	I0717 01:49:19.390334   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:19.390269   77857 retry.go:31] will retry after 2.04669951s: waiting for machine to come up
	I0717 01:49:21.438983   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:21.439498   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find current IP address of domain calico-453036 in network mk-calico-453036
	I0717 01:49:21.439520   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:21.439460   77857 retry.go:31] will retry after 3.563564605s: waiting for machine to come up
	I0717 01:49:25.004519   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:25.005042   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find current IP address of domain calico-453036 in network mk-calico-453036
	I0717 01:49:25.005061   77834 main.go:141] libmachine: (calico-453036) DBG | I0717 01:49:25.005008   77857 retry.go:31] will retry after 4.423238962s: waiting for machine to come up
	I0717 01:49:29.429487   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:29.429969   77834 main.go:141] libmachine: (calico-453036) Found IP for machine: 192.168.61.27
	I0717 01:49:29.429997   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has current primary IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:29.430003   77834 main.go:141] libmachine: (calico-453036) Reserving static IP address...
	I0717 01:49:29.430403   77834 main.go:141] libmachine: (calico-453036) DBG | unable to find host DHCP lease matching {name: "calico-453036", mac: "52:54:00:95:51:6b", ip: "192.168.61.27"} in network mk-calico-453036
	I0717 01:49:29.505224   77834 main.go:141] libmachine: (calico-453036) Reserved static IP address: 192.168.61.27
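The "will retry after ..." lines above are a growing-delay poll for the domain's DHCP lease. A rough sketch of that loop, with `virsh net-dhcp-leases` standing in as the lease source (minikube's retry package and its libvirt bindings are not used here; the backoff numbers are illustrative):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"os/exec"
		"strings"
		"time"
	)

	// lookupIP is a stand-in for "ask libvirt for the lease of this MAC":
	// it scans `virsh net-dhcp-leases <network>` for a row containing the MAC.
	func lookupIP(network, mac string) (string, error) {
		out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
		if err != nil {
			return "", err
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, mac) {
				fields := strings.Fields(line)
				if len(fields) >= 5 {
					return strings.Split(fields[4], "/")[0], nil // "192.168.61.27/24" -> IP
				}
			}
		}
		return "", errors.New("no lease yet")
	}

	func main() {
		const network, mac = "mk-calico-453036", "52:54:00:95:51:6b"
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(4 * time.Minute)

		for time.Now().Before(deadline) {
			if ip, err := lookupIP(network, mac); err == nil {
				fmt.Println("found IP:", ip)
				return
			}
			// Grow the delay and add a little jitter, like the retry.go lines above.
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
		fmt.Println("timed out waiting for an IP")
	}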
	I0717 01:49:29.505249   77834 main.go:141] libmachine: (calico-453036) Waiting for SSH to be available...
	I0717 01:49:29.505257   77834 main.go:141] libmachine: (calico-453036) DBG | Getting to WaitForSSH function...
	I0717 01:49:29.507754   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:29.508203   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:minikube Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:29.508242   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:29.508334   77834 main.go:141] libmachine: (calico-453036) DBG | Using SSH client type: external
	I0717 01:49:29.508354   77834 main.go:141] libmachine: (calico-453036) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036/id_rsa (-rw-------)
	I0717 01:49:29.508380   77834 main.go:141] libmachine: (calico-453036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:49:29.508400   77834 main.go:141] libmachine: (calico-453036) DBG | About to run SSH command:
	I0717 01:49:29.508412   77834 main.go:141] libmachine: (calico-453036) DBG | exit 0
	I0717 01:49:29.640926   77834 main.go:141] libmachine: (calico-453036) DBG | SSH cmd err, output: <nil>: 
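WaitForSSH above simply runs `exit 0` through the external ssh client with the options shown in the log until the command succeeds. A small sketch of that probe; the key path is a placeholder and only a subset of the logged ssh options is passed:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs "exit 0" on the guest with roughly the ssh options logged
	// above; a zero exit status means sshd is up and the key is accepted.
	func sshReady(ip, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@"+ip,
			"exit 0")
		return cmd.Run() == nil
	}

	func main() {
		const ip = "192.168.61.27"
		const key = "/path/to/machines/calico-453036/id_rsa" // placeholder

		for i := 0; i < 30; i++ {
			if sshReady(ip, key) {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}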
	I0717 01:49:29.641205   77834 main.go:141] libmachine: (calico-453036) KVM machine creation complete!
	I0717 01:49:29.641540   77834 main.go:141] libmachine: (calico-453036) Calling .GetConfigRaw
	I0717 01:49:29.642044   77834 main.go:141] libmachine: (calico-453036) Calling .DriverName
	I0717 01:49:29.642243   77834 main.go:141] libmachine: (calico-453036) Calling .DriverName
	I0717 01:49:29.642413   77834 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 01:49:29.642428   77834 main.go:141] libmachine: (calico-453036) Calling .GetState
	I0717 01:49:29.643626   77834 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 01:49:29.643639   77834 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 01:49:29.643644   77834 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 01:49:29.643650   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHHostname
	I0717 01:49:29.646145   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:29.646600   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:29.646628   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:29.646765   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHPort
	I0717 01:49:29.647079   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:29.647240   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:29.647410   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHUsername
	I0717 01:49:29.647627   77834 main.go:141] libmachine: Using SSH client type: native
	I0717 01:49:29.647822   77834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.27 22 <nil> <nil>}
	I0717 01:49:29.647837   77834 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 01:49:29.759844   77834 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:49:29.759866   77834 main.go:141] libmachine: Detecting the provisioner...
	I0717 01:49:29.759876   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHHostname
	I0717 01:49:29.762780   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:29.763241   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:29.763267   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:29.763457   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHPort
	I0717 01:49:29.763654   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:29.763843   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:29.764028   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHUsername
	I0717 01:49:29.764213   77834 main.go:141] libmachine: Using SSH client type: native
	I0717 01:49:29.764416   77834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.27 22 <nil> <nil>}
	I0717 01:49:29.764431   77834 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 01:49:29.877078   77834 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 01:49:29.877167   77834 main.go:141] libmachine: found compatible host: buildroot
	I0717 01:49:29.877186   77834 main.go:141] libmachine: Provisioning with buildroot...
	I0717 01:49:29.877205   77834 main.go:141] libmachine: (calico-453036) Calling .GetMachineName
	I0717 01:49:29.877435   77834 buildroot.go:166] provisioning hostname "calico-453036"
	I0717 01:49:29.877455   77834 main.go:141] libmachine: (calico-453036) Calling .GetMachineName
	I0717 01:49:29.877652   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHHostname
	I0717 01:49:29.880248   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:29.880643   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:29.880671   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:29.880874   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHPort
	I0717 01:49:29.881040   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:29.881196   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:29.881334   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHUsername
	I0717 01:49:29.881469   77834 main.go:141] libmachine: Using SSH client type: native
	I0717 01:49:29.881640   77834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.27 22 <nil> <nil>}
	I0717 01:49:29.881665   77834 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-453036 && echo "calico-453036" | sudo tee /etc/hostname
	I0717 01:49:30.015141   77834 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-453036
	
	I0717 01:49:30.015173   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHHostname
	I0717 01:49:30.018117   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.018477   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:30.018503   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.018723   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHPort
	I0717 01:49:30.018941   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:30.019146   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:30.019327   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHUsername
	I0717 01:49:30.019529   77834 main.go:141] libmachine: Using SSH client type: native
	I0717 01:49:30.019782   77834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.27 22 <nil> <nil>}
	I0717 01:49:30.019809   77834 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-453036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-453036/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-453036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:49:30.146958   77834 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:49:30.146994   77834 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 01:49:30.147032   77834 buildroot.go:174] setting up certificates
	I0717 01:49:30.147040   77834 provision.go:84] configureAuth start
	I0717 01:49:30.147049   77834 main.go:141] libmachine: (calico-453036) Calling .GetMachineName
	I0717 01:49:30.147358   77834 main.go:141] libmachine: (calico-453036) Calling .GetIP
	I0717 01:49:30.150132   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.150494   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:30.150512   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.150705   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHHostname
	I0717 01:49:30.152928   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.153271   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:30.153298   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.153419   77834 provision.go:143] copyHostCerts
	I0717 01:49:30.153502   77834 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 01:49:30.153520   77834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 01:49:30.153687   77834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 01:49:30.153836   77834 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 01:49:30.153860   77834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 01:49:30.153905   77834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 01:49:30.153995   77834 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 01:49:30.154005   77834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 01:49:30.154036   77834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 01:49:30.154107   77834 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.calico-453036 san=[127.0.0.1 192.168.61.27 calico-453036 localhost minikube]
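provision.go:117 above is issuing a per-machine server certificate signed by the minikube CA, with the IP, hostname, localhost and minikube listed as SANs. A condensed, self-contained sketch of that kind of issuance with crypto/x509; it generates a throwaway CA instead of loading ca.pem/ca-key.pem, so it illustrates the shape of the step rather than minikube's exact code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA (the real flow loads ca.pem / ca-key.pem from .minikube/certs).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs listed in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.calico-453036"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"calico-453036", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.27")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}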
	I0717 01:49:30.483407   77834 provision.go:177] copyRemoteCerts
	I0717 01:49:30.483463   77834 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:49:30.483485   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHHostname
	I0717 01:49:30.486274   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.486742   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:30.486793   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.486928   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHPort
	I0717 01:49:30.487130   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:30.487303   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHUsername
	I0717 01:49:30.487447   77834 sshutil.go:53] new ssh client: &{IP:192.168.61.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036/id_rsa Username:docker}
	I0717 01:49:30.579388   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 01:49:30.604484   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 01:49:30.628129   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:49:30.652120   77834 provision.go:87] duration metric: took 505.067971ms to configureAuth
	I0717 01:49:30.652152   77834 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:49:30.652361   77834 config.go:182] Loaded profile config "calico-453036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:49:30.652442   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHHostname
	I0717 01:49:30.654920   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.655297   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:30.655321   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.655491   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHPort
	I0717 01:49:30.655687   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:30.655837   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:30.655963   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHUsername
	I0717 01:49:30.656103   77834 main.go:141] libmachine: Using SSH client type: native
	I0717 01:49:30.656288   77834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.27 22 <nil> <nil>}
	I0717 01:49:30.656307   77834 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:49:30.938918   77834 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:49:30.938947   77834 main.go:141] libmachine: Checking connection to Docker...
	I0717 01:49:30.938958   77834 main.go:141] libmachine: (calico-453036) Calling .GetURL
	I0717 01:49:30.940349   77834 main.go:141] libmachine: (calico-453036) DBG | Using libvirt version 6000000
	I0717 01:49:30.942876   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.943232   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:30.943252   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.943435   77834 main.go:141] libmachine: Docker is up and running!
	I0717 01:49:30.943453   77834 main.go:141] libmachine: Reticulating splines...
	I0717 01:49:30.943460   77834 client.go:171] duration metric: took 22.076966049s to LocalClient.Create
	I0717 01:49:30.943484   77834 start.go:167] duration metric: took 22.077028579s to libmachine.API.Create "calico-453036"
	I0717 01:49:30.943494   77834 start.go:293] postStartSetup for "calico-453036" (driver="kvm2")
	I0717 01:49:30.943507   77834 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:49:30.943527   77834 main.go:141] libmachine: (calico-453036) Calling .DriverName
	I0717 01:49:30.943756   77834 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:49:30.943815   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHHostname
	I0717 01:49:30.945881   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.946289   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:30.946314   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:30.946446   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHPort
	I0717 01:49:30.946658   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:30.946938   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHUsername
	I0717 01:49:30.947097   77834 sshutil.go:53] new ssh client: &{IP:192.168.61.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036/id_rsa Username:docker}
	I0717 01:49:31.035481   77834 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:49:31.039779   77834 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:49:31.039807   77834 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:49:31.039878   77834 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:49:31.039968   77834 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:49:31.040085   77834 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:49:31.049661   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:49:31.074977   77834 start.go:296] duration metric: took 131.469081ms for postStartSetup
	I0717 01:49:31.075040   77834 main.go:141] libmachine: (calico-453036) Calling .GetConfigRaw
	I0717 01:49:31.075648   77834 main.go:141] libmachine: (calico-453036) Calling .GetIP
	I0717 01:49:31.078677   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:31.079099   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:31.079128   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:31.079403   77834 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/config.json ...
	I0717 01:49:31.079588   77834 start.go:128] duration metric: took 22.231907251s to createHost
	I0717 01:49:31.079611   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHHostname
	I0717 01:49:31.082030   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:31.082334   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:31.082359   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:31.082597   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHPort
	I0717 01:49:31.082787   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:31.082936   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:31.083087   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHUsername
	I0717 01:49:31.083270   77834 main.go:141] libmachine: Using SSH client type: native
	I0717 01:49:31.083440   77834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.27 22 <nil> <nil>}
	I0717 01:49:31.083455   77834 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:49:31.197255   77834 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180971.170129779
	
	I0717 01:49:31.197279   77834 fix.go:216] guest clock: 1721180971.170129779
	I0717 01:49:31.197291   77834 fix.go:229] Guest: 2024-07-17 01:49:31.170129779 +0000 UTC Remote: 2024-07-17 01:49:31.079599158 +0000 UTC m=+22.342146690 (delta=90.530621ms)
	I0717 01:49:31.197322   77834 fix.go:200] guest clock delta is within tolerance: 90.530621ms
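The guest-clock check above runs `date +%s.%N` on the guest, parses the result, and accepts the machine only if the delta from the host clock is within a tolerance. A minimal sketch of that comparison; the tolerance value here is an arbitrary example, not minikube's constant:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseDateOutput parses the "seconds.nanoseconds" string printed by
	// `date +%s.%N` (the %N field is zero-padded to nine digits).
	func parseDateOutput(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec := int64(0)
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// Value taken from the SSH output logged above.
		guest, err := parseDateOutput("1721180971.170129779")
		if err != nil {
			panic(err)
		}
		host := time.Now()

		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		// Example tolerance only.
		const tolerance = 1 * time.Second
		fmt.Printf("guest clock delta is %v (tolerance %v, ok=%v)\n",
			delta, tolerance, delta <= tolerance)
	}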
	I0717 01:49:31.197333   77834 start.go:83] releasing machines lock for "calico-453036", held for 22.349755772s
	I0717 01:49:31.197359   77834 main.go:141] libmachine: (calico-453036) Calling .DriverName
	I0717 01:49:31.197620   77834 main.go:141] libmachine: (calico-453036) Calling .GetIP
	I0717 01:49:31.200208   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:31.200581   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:31.200612   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:31.200767   77834 main.go:141] libmachine: (calico-453036) Calling .DriverName
	I0717 01:49:31.201204   77834 main.go:141] libmachine: (calico-453036) Calling .DriverName
	I0717 01:49:31.201355   77834 main.go:141] libmachine: (calico-453036) Calling .DriverName
	I0717 01:49:31.201430   77834 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:49:31.201479   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHHostname
	I0717 01:49:31.201578   77834 ssh_runner.go:195] Run: cat /version.json
	I0717 01:49:31.201599   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHHostname
	I0717 01:49:31.204173   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:31.204452   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:31.204618   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:31.204689   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:31.204740   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHPort
	I0717 01:49:31.204879   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:31.204893   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:31.204900   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:31.205089   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHPort
	I0717 01:49:31.205096   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHUsername
	I0717 01:49:31.205249   77834 sshutil.go:53] new ssh client: &{IP:192.168.61.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036/id_rsa Username:docker}
	I0717 01:49:31.205273   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:49:31.205435   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHUsername
	I0717 01:49:31.205585   77834 sshutil.go:53] new ssh client: &{IP:192.168.61.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036/id_rsa Username:docker}
	I0717 01:49:31.314304   77834 ssh_runner.go:195] Run: systemctl --version
	I0717 01:49:31.320413   77834 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:49:31.482323   77834 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:49:31.488854   77834 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:49:31.488932   77834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:49:31.508426   77834 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:49:31.508452   77834 start.go:495] detecting cgroup driver to use...
	I0717 01:49:31.508523   77834 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:49:31.528037   77834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:49:31.543703   77834 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:49:31.543763   77834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:49:31.558449   77834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:49:31.574015   77834 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:49:31.695505   77834 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:49:31.845468   77834 docker.go:233] disabling docker service ...
	I0717 01:49:31.845561   77834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:49:31.860795   77834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:49:31.874176   77834 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:49:32.023352   77834 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:49:32.145287   77834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:49:32.161283   77834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:49:32.181041   77834 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:49:32.181113   77834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:49:32.192120   77834 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:49:32.192196   77834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:49:32.203458   77834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:49:32.214547   77834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:49:32.224884   77834 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:49:32.235738   77834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:49:32.247272   77834 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:49:32.264706   77834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
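Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands in the log, not captured from the guest; TOML section headers omitted):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]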
	I0717 01:49:32.275683   77834 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:49:32.285596   77834 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:49:32.285648   77834 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:49:32.299144   77834 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:49:32.308485   77834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:49:32.426678   77834 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:49:32.571560   77834 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:49:32.571639   77834 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:49:32.576806   77834 start.go:563] Will wait 60s for crictl version
	I0717 01:49:32.576843   77834 ssh_runner.go:195] Run: which crictl
	I0717 01:49:32.580570   77834 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:49:32.621066   77834 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:49:32.621167   77834 ssh_runner.go:195] Run: crio --version
	I0717 01:49:32.653001   77834 ssh_runner.go:195] Run: crio --version
	I0717 01:49:32.685355   77834 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:49:32.686624   77834 main.go:141] libmachine: (calico-453036) Calling .GetIP
	I0717 01:49:32.689166   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:32.689662   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:49:32.689692   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:49:32.689893   77834 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 01:49:32.694322   77834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:49:32.707407   77834 kubeadm.go:883] updating cluster {Name:calico-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.27 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:49:32.707525   77834 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:49:32.707584   77834 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:49:32.740327   77834 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:49:32.740404   77834 ssh_runner.go:195] Run: which lz4
	I0717 01:49:32.744735   77834 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:49:32.749159   77834 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:49:32.749195   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:49:34.200755   77834 crio.go:462] duration metric: took 1.456062273s to copy over tarball
	I0717 01:49:34.200825   77834 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:49:36.604178   77834 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.403328403s)
	I0717 01:49:36.604209   77834 crio.go:469] duration metric: took 2.403429036s to extract the tarball
	I0717 01:49:36.604234   77834 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:49:36.641794   77834 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:49:36.683402   77834 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:49:36.683428   77834 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:49:36.683436   77834 kubeadm.go:934] updating node { 192.168.61.27 8443 v1.30.2 crio true true} ...
	I0717 01:49:36.683561   77834 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-453036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:calico-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0717 01:49:36.683643   77834 ssh_runner.go:195] Run: crio config
	I0717 01:49:36.732402   77834 cni.go:84] Creating CNI manager for "calico"
	I0717 01:49:36.732433   77834 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:49:36.732460   77834 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.27 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-453036 NodeName:calico-453036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.27 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:49:36.732639   77834 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.27
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-453036"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.27
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.27"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
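The block above is the kubeadm config that minikube renders for this profile before running init. As a minimal sketch (assuming kubeadm v1.30.x is on PATH and the YAML above is saved locally as kubeadm.yaml, neither of which is part of this test run), the same config can be checked by hand without touching a host:

    # Dry-run the same init the harness performs below; nothing is persisted.
    sudo kubeadm init --config kubeadm.yaml --dry-run
    # Or run only the preflight checks against the config.
    sudo kubeadm init phase preflight --config kubeadm.yaml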
	
	I0717 01:49:36.732716   77834 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:49:36.743985   77834 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:49:36.744043   77834 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:49:36.753962   77834 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0717 01:49:36.771557   77834 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:49:36.788175   77834 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0717 01:49:36.805260   77834 ssh_runner.go:195] Run: grep 192.168.61.27	control-plane.minikube.internal$ /etc/hosts
	I0717 01:49:36.809343   77834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:49:36.822481   77834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:49:36.952079   77834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:49:36.969523   77834 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036 for IP: 192.168.61.27
	I0717 01:49:36.969544   77834 certs.go:194] generating shared ca certs ...
	I0717 01:49:36.969558   77834 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:49:36.969719   77834 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:49:36.969770   77834 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:49:36.969780   77834 certs.go:256] generating profile certs ...
	I0717 01:49:36.969832   77834 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.key
	I0717 01:49:36.969850   77834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt with IP's: []
	I0717 01:49:37.178431   77834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt ...
	I0717 01:49:37.178456   77834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: {Name:mke8bd0e15865f40b1529c11f964816219baa4da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:49:37.178611   77834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.key ...
	I0717 01:49:37.178621   77834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.key: {Name:mk4cc8d91e9456441ccb562df79164bd0fa12c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:49:37.178691   77834 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/apiserver.key.8a7a2deb
	I0717 01:49:37.178705   77834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/apiserver.crt.8a7a2deb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.27]
	I0717 01:49:37.242119   77834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/apiserver.crt.8a7a2deb ...
	I0717 01:49:37.242149   77834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/apiserver.crt.8a7a2deb: {Name:mkede54dabe354df3f853c57d442a4c37560a73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:49:37.242341   77834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/apiserver.key.8a7a2deb ...
	I0717 01:49:37.242364   77834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/apiserver.key.8a7a2deb: {Name:mkb94abf018c54f1bae6e12b592b8044c899e36d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:49:37.242471   77834 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/apiserver.crt.8a7a2deb -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/apiserver.crt
	I0717 01:49:37.242541   77834 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/apiserver.key.8a7a2deb -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/apiserver.key
	I0717 01:49:37.242589   77834 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/proxy-client.key
	I0717 01:49:37.242602   77834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/proxy-client.crt with IP's: []
	I0717 01:49:37.364519   77834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/proxy-client.crt ...
	I0717 01:49:37.364573   77834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/proxy-client.crt: {Name:mk2cec9b2fa7f7af9823344e4b8dafcedd55c301 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:49:37.364775   77834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/proxy-client.key ...
	I0717 01:49:37.364795   77834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/proxy-client.key: {Name:mk26c4c00eaee6edb43ffcde83eb391e9d93885e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:49:37.365053   77834 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:49:37.365103   77834 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:49:37.365120   77834 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:49:37.365149   77834 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:49:37.365177   77834 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:49:37.365219   77834 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:49:37.365274   77834 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:49:37.365985   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:49:37.392482   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:49:37.419997   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:49:37.446380   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:49:37.475453   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 01:49:37.505430   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:49:37.534120   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:49:37.561519   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:49:37.591285   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:49:37.622893   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:49:37.655049   77834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:49:37.692644   77834 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:49:37.712644   77834 ssh_runner.go:195] Run: openssl version
	I0717 01:49:37.720896   77834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:49:37.732490   77834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:49:37.737521   77834 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:49:37.737590   77834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:49:37.743610   77834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:49:37.754595   77834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:49:37.767439   77834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:49:37.772329   77834 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:49:37.772388   77834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:49:37.778734   77834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:49:37.790806   77834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:49:37.803208   77834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:49:37.809417   77834 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:49:37.809482   77834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:49:37.815978   77834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
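The three sequences above (for 200682.pem, minikubeCA.pem, and 20068.pem) are the standard OpenSSL CA-directory setup: hash the certificate's subject with openssl x509 -hash, then install a <hash>.0 symlink under /etc/ssl/certs so tools that scan that directory can resolve the CA. A minimal sketch of the same step for a single, hypothetical CA file (my-ca.pem is an assumption, not a file from this run):

    # Compute the subject hash, then install the <hash>.0 symlink the way the log above does.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
    sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${HASH}.0"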
	I0717 01:49:37.827394   77834 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:49:37.832194   77834 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 01:49:37.832251   77834 kubeadm.go:392] StartCluster: {Name:calico-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:calico-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.27 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:49:37.832331   77834 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:49:37.832384   77834 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:49:37.871083   77834 cri.go:89] found id: ""
	I0717 01:49:37.871159   77834 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:49:37.881585   77834 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:49:37.893166   77834 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:49:37.903022   77834 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:49:37.903044   77834 kubeadm.go:157] found existing configuration files:
	
	I0717 01:49:37.903093   77834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:49:37.912746   77834 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:49:37.912807   77834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:49:37.922743   77834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:49:37.932070   77834 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:49:37.932127   77834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:49:37.941938   77834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:49:37.950993   77834 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:49:37.951067   77834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:49:37.960605   77834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:49:37.970088   77834 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:49:37.970154   77834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:49:37.979741   77834 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:49:38.039813   77834 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 01:49:38.039884   77834 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:49:38.182747   77834 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:49:38.182895   77834 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:49:38.183037   77834 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:49:38.401273   77834 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:49:38.434005   77834 out.go:204]   - Generating certificates and keys ...
	I0717 01:49:38.434140   77834 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:49:38.434239   77834 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:49:38.665250   77834 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 01:49:39.100463   77834 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 01:49:39.616012   77834 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 01:49:39.740288   77834 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 01:49:40.061011   77834 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 01:49:40.061136   77834 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-453036 localhost] and IPs [192.168.61.27 127.0.0.1 ::1]
	I0717 01:49:40.209775   77834 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 01:49:40.209918   77834 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-453036 localhost] and IPs [192.168.61.27 127.0.0.1 ::1]
	I0717 01:49:40.384232   77834 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 01:49:40.544526   77834 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 01:49:40.754581   77834 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 01:49:40.754830   77834 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:49:41.244077   77834 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:49:41.455598   77834 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 01:49:41.596884   77834 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:49:41.751613   77834 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:49:42.008069   77834 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:49:42.008740   77834 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:49:42.011615   77834 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:49:42.021034   77834 out.go:204]   - Booting up control plane ...
	I0717 01:49:42.021182   77834 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:49:42.021313   77834 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:49:42.021440   77834 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:49:42.033810   77834 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:49:42.034796   77834 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:49:42.034914   77834 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:49:42.184601   77834 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 01:49:42.184680   77834 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 01:49:43.185541   77834 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001658903s
	I0717 01:49:43.185641   77834 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 01:49:48.184696   77834 kubeadm.go:310] [api-check] The API server is healthy after 5.001733501s
	I0717 01:49:48.201812   77834 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 01:49:48.213895   77834 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 01:49:48.243786   77834 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 01:49:48.244071   77834 kubeadm.go:310] [mark-control-plane] Marking the node calico-453036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 01:49:48.257077   77834 kubeadm.go:310] [bootstrap-token] Using token: yq5dos.t1npgu0l2mbxs18j
	I0717 01:49:48.259420   77834 out.go:204]   - Configuring RBAC rules ...
	I0717 01:49:48.259555   77834 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 01:49:48.269902   77834 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 01:49:48.277646   77834 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 01:49:48.281235   77834 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 01:49:48.285236   77834 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 01:49:48.290266   77834 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 01:49:48.591613   77834 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 01:49:49.025295   77834 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 01:49:49.591950   77834 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 01:49:49.592973   77834 kubeadm.go:310] 
	I0717 01:49:49.593069   77834 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 01:49:49.593079   77834 kubeadm.go:310] 
	I0717 01:49:49.593200   77834 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 01:49:49.593210   77834 kubeadm.go:310] 
	I0717 01:49:49.593260   77834 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 01:49:49.593344   77834 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 01:49:49.593426   77834 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 01:49:49.593435   77834 kubeadm.go:310] 
	I0717 01:49:49.593511   77834 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 01:49:49.593520   77834 kubeadm.go:310] 
	I0717 01:49:49.593590   77834 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 01:49:49.593599   77834 kubeadm.go:310] 
	I0717 01:49:49.593675   77834 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 01:49:49.593788   77834 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 01:49:49.593865   77834 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 01:49:49.593873   77834 kubeadm.go:310] 
	I0717 01:49:49.593949   77834 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 01:49:49.594017   77834 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 01:49:49.594041   77834 kubeadm.go:310] 
	I0717 01:49:49.594112   77834 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yq5dos.t1npgu0l2mbxs18j \
	I0717 01:49:49.594253   77834 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 \
	I0717 01:49:49.594282   77834 kubeadm.go:310] 	--control-plane 
	I0717 01:49:49.594299   77834 kubeadm.go:310] 
	I0717 01:49:49.594412   77834 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 01:49:49.594422   77834 kubeadm.go:310] 
	I0717 01:49:49.594503   77834 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yq5dos.t1npgu0l2mbxs18j \
	I0717 01:49:49.594591   77834 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 
	I0717 01:49:49.595074   77834 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:49:49.595271   77834 cni.go:84] Creating CNI manager for "calico"
	I0717 01:49:49.597722   77834 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0717 01:49:49.599439   77834 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 01:49:49.599457   77834 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (253815 bytes)
	I0717 01:49:49.621540   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 01:49:50.947840   77834 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.326265186s)
	I0717 01:49:50.947883   77834 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:49:50.948002   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:50.948001   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-453036 minikube.k8s.io/updated_at=2024_07_17T01_49_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=calico-453036 minikube.k8s.io/primary=true
	I0717 01:49:50.970526   77834 ops.go:34] apiserver oom_adj: -16
	I0717 01:49:51.084138   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:51.584345   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:52.084967   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:52.584915   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:53.084268   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:53.584458   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
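The repeated "kubectl get sa default" calls above are minikube polling roughly every 500ms until the default service account exists, which is the "default_sa" verification listed in the cluster config earlier in this log. A rough shell equivalent of that wait, with the attempt count and interval chosen here rather than taken from minikube, and assuming kubectl is on PATH:

    # Poll for the "default" service account, as the log above does; give up after ~60 tries.
    for i in $(seq 1 60); do
      sudo kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1 && break
      sleep 0.5
    done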
	
	
	==> CRI-O <==
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.038528308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180994038503666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d07df2d-bef4-4971-b1e6-792d82b05629 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.038900419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4d5030b-60cf-4616-be3d-d84202e2d209 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.038981010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4d5030b-60cf-4616-be3d-d84202e2d209 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.039176455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179781156710506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d993bb9350f7bfc32762f91918a1cb985ed555ea57afdb3efe52e40c1f37803,PodSandboxId:580c1f98b322514e8dc6af4b464a4e9712a0cef358428b2067f3f95b2a4f8f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721179759162259370,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f9c5cb46-8df1-450a-9ca7-a686651c1835,},Annotations:map[string]string{io.kubernetes.container.hash: 21f4c01a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187,PodSandboxId:cac67b7d41ea1385a1e0eca5710372b6fd990ff55283adb3fcd616be564f0dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179757918652809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z4qpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43aa103c-9e70-4fb1-8607-321b6904a218,},Annotations:map[string]string{io.kubernetes.container.hash: ed0dfeb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721179750371786140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364,PodSandboxId:06e63e0ee89343e4f704f40b041c99eba9560210004538fbeedf4d9f5e899af2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179750367476881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gq7qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9a0ae4-28e0-4900-a39b-f7a0eba7c
c06,},Annotations:map[string]string{io.kubernetes.container.hash: 313309da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c,PodSandboxId:33e11f7db5878fd01048d61d2099a8becdfebc5897f3800ca3f074588f863c13,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179745612950992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bec379c140db7a
0ad7e87dd7d54513da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026,PodSandboxId:f61e87c7b0eade411dc2d12c48d596b2b233980e47721e338454c6c50c5cdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179745635815659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca69dd5666621348366299d511
a00935,},Annotations:map[string]string{io.kubernetes.container.hash: 17c2edea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802,PodSandboxId:0d62d3963c8101b674dd20a45d0bb0b34e4a21d3ff09d70b05121745617a8ee9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179745639586318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec50a383234f49917f3a24369567b00,},Ann
otations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c,PodSandboxId:d11db21897316076a25a10d3cfc9c882b128a44c0a1d0ced43e8092e0755fb31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179745613603556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e013499247e47bae51c51faca75cfb,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 638512c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4d5030b-60cf-4616-be3d-d84202e2d209 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.079769530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c81d5e22-3699-4504-a94d-14a2600aff56 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.079856590Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c81d5e22-3699-4504-a94d-14a2600aff56 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.081048420Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91c12eb5-0de2-46de-8deb-4624b859b41a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.081521674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180994081498824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91c12eb5-0de2-46de-8deb-4624b859b41a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.081965797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e606bbf-99ef-48dd-8b67-fb683d4b88af name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.082031354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e606bbf-99ef-48dd-8b67-fb683d4b88af name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.082226358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179781156710506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d993bb9350f7bfc32762f91918a1cb985ed555ea57afdb3efe52e40c1f37803,PodSandboxId:580c1f98b322514e8dc6af4b464a4e9712a0cef358428b2067f3f95b2a4f8f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721179759162259370,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f9c5cb46-8df1-450a-9ca7-a686651c1835,},Annotations:map[string]string{io.kubernetes.container.hash: 21f4c01a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187,PodSandboxId:cac67b7d41ea1385a1e0eca5710372b6fd990ff55283adb3fcd616be564f0dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179757918652809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z4qpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43aa103c-9e70-4fb1-8607-321b6904a218,},Annotations:map[string]string{io.kubernetes.container.hash: ed0dfeb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721179750371786140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364,PodSandboxId:06e63e0ee89343e4f704f40b041c99eba9560210004538fbeedf4d9f5e899af2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179750367476881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gq7qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9a0ae4-28e0-4900-a39b-f7a0eba7c
c06,},Annotations:map[string]string{io.kubernetes.container.hash: 313309da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c,PodSandboxId:33e11f7db5878fd01048d61d2099a8becdfebc5897f3800ca3f074588f863c13,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179745612950992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bec379c140db7a
0ad7e87dd7d54513da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026,PodSandboxId:f61e87c7b0eade411dc2d12c48d596b2b233980e47721e338454c6c50c5cdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179745635815659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca69dd5666621348366299d511
a00935,},Annotations:map[string]string{io.kubernetes.container.hash: 17c2edea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802,PodSandboxId:0d62d3963c8101b674dd20a45d0bb0b34e4a21d3ff09d70b05121745617a8ee9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179745639586318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec50a383234f49917f3a24369567b00,},Ann
otations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c,PodSandboxId:d11db21897316076a25a10d3cfc9c882b128a44c0a1d0ced43e8092e0755fb31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179745613603556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e013499247e47bae51c51faca75cfb,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 638512c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e606bbf-99ef-48dd-8b67-fb683d4b88af name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.122326569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f87d9686-afb0-4335-ac6c-0b63fa77bce5 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.122471153Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f87d9686-afb0-4335-ac6c-0b63fa77bce5 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.123615285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37ee02f1-9b08-44fd-bb99-e097b9ec5f2d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.124013810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180994123990649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37ee02f1-9b08-44fd-bb99-e097b9ec5f2d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.125986666Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0049b077-7db8-4690-9ece-f7e531cb59b0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.126061313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0049b077-7db8-4690-9ece-f7e531cb59b0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.126262835Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179781156710506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d993bb9350f7bfc32762f91918a1cb985ed555ea57afdb3efe52e40c1f37803,PodSandboxId:580c1f98b322514e8dc6af4b464a4e9712a0cef358428b2067f3f95b2a4f8f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721179759162259370,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f9c5cb46-8df1-450a-9ca7-a686651c1835,},Annotations:map[string]string{io.kubernetes.container.hash: 21f4c01a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187,PodSandboxId:cac67b7d41ea1385a1e0eca5710372b6fd990ff55283adb3fcd616be564f0dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179757918652809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z4qpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43aa103c-9e70-4fb1-8607-321b6904a218,},Annotations:map[string]string{io.kubernetes.container.hash: ed0dfeb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721179750371786140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364,PodSandboxId:06e63e0ee89343e4f704f40b041c99eba9560210004538fbeedf4d9f5e899af2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179750367476881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gq7qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9a0ae4-28e0-4900-a39b-f7a0eba7c
c06,},Annotations:map[string]string{io.kubernetes.container.hash: 313309da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c,PodSandboxId:33e11f7db5878fd01048d61d2099a8becdfebc5897f3800ca3f074588f863c13,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179745612950992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bec379c140db7a
0ad7e87dd7d54513da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026,PodSandboxId:f61e87c7b0eade411dc2d12c48d596b2b233980e47721e338454c6c50c5cdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179745635815659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca69dd5666621348366299d511
a00935,},Annotations:map[string]string{io.kubernetes.container.hash: 17c2edea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802,PodSandboxId:0d62d3963c8101b674dd20a45d0bb0b34e4a21d3ff09d70b05121745617a8ee9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179745639586318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec50a383234f49917f3a24369567b00,},Ann
otations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c,PodSandboxId:d11db21897316076a25a10d3cfc9c882b128a44c0a1d0ced43e8092e0755fb31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179745613603556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e013499247e47bae51c51faca75cfb,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 638512c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0049b077-7db8-4690-9ece-f7e531cb59b0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.162872040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7171e13-4577-4d35-ad6d-a329abad5022 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.162945448Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7171e13-4577-4d35-ad6d-a329abad5022 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.164441052Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b961eb12-a3a4-4711-8059-52c711e16070 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.164956637Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180994164934619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b961eb12-a3a4-4711-8059-52c711e16070 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.165644372Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d23b7f5d-125a-4237-bc16-fcf4a205ae0b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.165699150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d23b7f5d-125a-4237-bc16-fcf4a205ae0b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:49:54 embed-certs-484167 crio[722]: time="2024-07-17 01:49:54.166593871Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179781156710506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d993bb9350f7bfc32762f91918a1cb985ed555ea57afdb3efe52e40c1f37803,PodSandboxId:580c1f98b322514e8dc6af4b464a4e9712a0cef358428b2067f3f95b2a4f8f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721179759162259370,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f9c5cb46-8df1-450a-9ca7-a686651c1835,},Annotations:map[string]string{io.kubernetes.container.hash: 21f4c01a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187,PodSandboxId:cac67b7d41ea1385a1e0eca5710372b6fd990ff55283adb3fcd616be564f0dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179757918652809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z4qpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43aa103c-9e70-4fb1-8607-321b6904a218,},Annotations:map[string]string{io.kubernetes.container.hash: ed0dfeb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272,PodSandboxId:2826492fd74f07a1dc229c66df64871ca1cd4ea47039ae6589238f1e340aba3e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721179750371786140,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
620df9ee-45a9-4b04-a21c-0ddc878375ca,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6999b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364,PodSandboxId:06e63e0ee89343e4f704f40b041c99eba9560210004538fbeedf4d9f5e899af2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179750367476881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gq7qg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9a0ae4-28e0-4900-a39b-f7a0eba7c
c06,},Annotations:map[string]string{io.kubernetes.container.hash: 313309da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c,PodSandboxId:33e11f7db5878fd01048d61d2099a8becdfebc5897f3800ca3f074588f863c13,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179745612950992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bec379c140db7a
0ad7e87dd7d54513da,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026,PodSandboxId:f61e87c7b0eade411dc2d12c48d596b2b233980e47721e338454c6c50c5cdbbc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179745635815659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca69dd5666621348366299d511
a00935,},Annotations:map[string]string{io.kubernetes.container.hash: 17c2edea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802,PodSandboxId:0d62d3963c8101b674dd20a45d0bb0b34e4a21d3ff09d70b05121745617a8ee9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179745639586318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec50a383234f49917f3a24369567b00,},Ann
otations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c,PodSandboxId:d11db21897316076a25a10d3cfc9c882b128a44c0a1d0ced43e8092e0755fb31,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179745613603556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-484167,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81e013499247e47bae51c51faca75cfb,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 638512c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d23b7f5d-125a-4237-bc16-fcf4a205ae0b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a425272031e79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   2826492fd74f0       storage-provisioner
	7d993bb9350f7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   580c1f98b3225       busybox
	370fe40274893       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   cac67b7d41ea1       coredns-7db6d8ff4d-z4qpz
	dc597519e45ca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   2826492fd74f0       storage-provisioner
	2bad298334c16       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      20 minutes ago      Running             kube-proxy                1                   06e63e0ee8934       kube-proxy-gq7qg
	98433f2cdcf43       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      20 minutes ago      Running             kube-scheduler            1                   0d62d3963c810       kube-scheduler-embed-certs-484167
	d8d11986de466       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      20 minutes ago      Running             kube-apiserver            1                   f61e87c7b0ead       kube-apiserver-embed-certs-484167
	980691b126eee       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Running             etcd                      1                   d11db21897316       etcd-embed-certs-484167
	b9c4b4f6e05b2       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      20 minutes ago      Running             kube-controller-manager   1                   33e11f7db5878       kube-controller-manager-embed-certs-484167
	
	
	==> coredns [370fe402748938c44f5d0326d5f11b9e48f2a2d5ea60139a898a443820c8f187] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40336 - 4277 "HINFO IN 9002073944448212575.8652882617969124480. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009951171s
	
	
	==> describe nodes <==
	Name:               embed-certs-484167
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-484167
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=embed-certs-484167
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_20_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:20:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-484167
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:49:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:44:57 +0000   Wed, 17 Jul 2024 01:20:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:44:57 +0000   Wed, 17 Jul 2024 01:20:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:44:57 +0000   Wed, 17 Jul 2024 01:20:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:44:57 +0000   Wed, 17 Jul 2024 01:29:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.48
	  Hostname:    embed-certs-484167
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64980e167f3d439991be2dff0b86f1ea
	  System UUID:                64980e16-7f3d-4399-91be-2dff0b86f1ea
	  Boot ID:                    b27debbd-3d14-429b-91ca-a1c60ef2f995
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-7db6d8ff4d-z4qpz                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-484167                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-484167             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-484167    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-gq7qg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-484167             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-2qwf6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-484167 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-484167 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-484167 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-484167 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-484167 event: Registered Node embed-certs-484167 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-484167 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-484167 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-484167 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-484167 event: Registered Node embed-certs-484167 in Controller
	
	
	==> dmesg <==
	[Jul17 01:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051101] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041196] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.036514] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.258383] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.628840] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.559161] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.065219] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060736] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.180674] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.122123] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.305976] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[Jul17 01:29] systemd-fstab-generator[805]: Ignoring "noauto" option for root device
	[  +0.069281] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.811311] systemd-fstab-generator[927]: Ignoring "noauto" option for root device
	[  +5.618976] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.060211] systemd-fstab-generator[1535]: Ignoring "noauto" option for root device
	[  +1.632889] kauditd_printk_skb: 62 callbacks suppressed
	[  +8.100583] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [980691b126eeea60de54a651cb77a677a7f13355b3bf4cc9b046735d26ab018c] <==
	{"level":"info","ts":"2024-07-17T01:29:07.816516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:29:07.816465Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"36b30da979eae81e","local-member-attributes":"{Name:embed-certs-484167 ClientURLs:[https://192.168.72.48:2379]}","request-path":"/0/members/36b30da979eae81e/attributes","cluster-id":"a85db1df86d6d05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:29:07.817698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:29:07.818071Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:29:07.818129Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:29:07.819269Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.48:2379"}
	{"level":"info","ts":"2024-07-17T01:29:07.82084Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-17T01:29:27.563679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.264295ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16725965213138843811 > lease_revoke:<id:681e90be4eae772e>","response":"size:27"}
	{"level":"info","ts":"2024-07-17T01:37:50.821237Z","caller":"traceutil/trace.go:171","msg":"trace[545432555] transaction","detail":"{read_only:false; response_revision:975; number_of_response:1; }","duration":"208.478805ms","start":"2024-07-17T01:37:50.612708Z","end":"2024-07-17T01:37:50.821187Z","steps":["trace[545432555] 'process raft request'  (duration: 208.321309ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:39:07.85015Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":794}
	{"level":"info","ts":"2024-07-17T01:39:07.860878Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":794,"took":"9.907457ms","hash":2571012677,"current-db-size-bytes":2564096,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2564096,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-17T01:39:07.860978Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2571012677,"revision":794,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T01:44:07.041853Z","caller":"traceutil/trace.go:171","msg":"trace[830321276] transaction","detail":"{read_only:false; response_revision:1278; number_of_response:1; }","duration":"211.616347ms","start":"2024-07-17T01:44:06.830194Z","end":"2024-07-17T01:44:07.041811Z","steps":["trace[830321276] 'process raft request'  (duration: 211.410973ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:44:07.990202Z","caller":"traceutil/trace.go:171","msg":"trace[1552353842] transaction","detail":"{read_only:false; response_revision:1281; number_of_response:1; }","duration":"138.525161ms","start":"2024-07-17T01:44:07.851661Z","end":"2024-07-17T01:44:07.990186Z","steps":["trace[1552353842] 'process raft request'  (duration: 75.054828ms)","trace[1552353842] 'compare'  (duration: 63.400868ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T01:44:08.109297Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1037}
	{"level":"warn","ts":"2024-07-17T01:44:08.109786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.4636ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16725965213138848890 username:\"kube-apiserver-etcd-client\" auth_revision:1 > compaction:<revision:1037 > ","response":"size:5"}
	{"level":"info","ts":"2024-07-17T01:44:08.109878Z","caller":"traceutil/trace.go:171","msg":"trace[348004077] compact","detail":"{revision:1037; response_revision:1281; }","duration":"118.37917ms","start":"2024-07-17T01:44:07.99149Z","end":"2024-07-17T01:44:08.109869Z","steps":["trace[348004077] 'check and update compact revision'  (duration: 114.332473ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:44:08.173142Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1037,"took":"63.532234ms","hash":3253904568,"current-db-size-bytes":2564096,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1585152,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-17T01:44:08.173283Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3253904568,"revision":1037,"compact-revision":794}
	{"level":"info","ts":"2024-07-17T01:45:47.842644Z","caller":"traceutil/trace.go:171","msg":"trace[1273012187] transaction","detail":"{read_only:false; response_revision:1363; number_of_response:1; }","duration":"111.587985ms","start":"2024-07-17T01:45:47.731028Z","end":"2024-07-17T01:45:47.842616Z","steps":["trace[1273012187] 'process raft request'  (duration: 111.468073ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:47:54.696648Z","caller":"traceutil/trace.go:171","msg":"trace[265116275] transaction","detail":"{read_only:false; response_revision:1465; number_of_response:1; }","duration":"110.591386ms","start":"2024-07-17T01:47:54.586018Z","end":"2024-07-17T01:47:54.696609Z","steps":["trace[265116275] 'process raft request'  (duration: 110.231529ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:49:08.120508Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1281}
	{"level":"info","ts":"2024-07-17T01:49:08.125209Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1281,"took":"4.358681ms","hash":3103451738,"current-db-size-bytes":2564096,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1552384,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-17T01:49:08.125284Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3103451738,"revision":1281,"compact-revision":1037}
	{"level":"warn","ts":"2024-07-17T01:49:38.170475Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.095846ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16725965213138850518 > lease_revoke:<id:681e90be4eae9281>","response":"size:27"}
	
	
	==> kernel <==
	 01:49:54 up 21 min,  0 users,  load average: 0.22, 0.16, 0.10
	Linux embed-certs-484167 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d8d11986de4668d899fff7000367e9fa6c320088aa5030d8f227942b85a57026] <==
	I0717 01:44:10.235423       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:45:10.235334       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:45:10.235461       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 01:45:10.235508       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:45:10.235558       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:45:10.235613       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 01:45:10.237445       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:47:10.235621       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:47:10.235952       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 01:47:10.236034       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:47:10.237688       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:47:10.237766       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 01:47:10.237774       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:49:09.239501       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:49:09.239835       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 01:49:10.241003       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:49:10.241054       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 01:49:10.241063       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:49:10.241101       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:49:10.241150       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 01:49:10.242346       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b9c4b4f6e05b295c38fc3531fc15b7772030b823b29be32bd31f31784d0b1d7c] <==
	E0717 01:44:24.337550       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:44:25.022929       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:44:54.346439       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:44:55.041097       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 01:45:22.951793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="323.123µs"
	E0717 01:45:24.354215       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:45:25.052881       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 01:45:37.947661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="121.63µs"
	E0717 01:45:54.361002       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:45:55.061624       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:46:24.368219       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:46:25.069572       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:46:54.373871       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:46:55.091313       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:47:24.378720       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:47:25.100550       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:47:54.384604       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:47:55.109576       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:48:24.390262       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:48:25.117325       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:48:54.396022       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:48:55.131938       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:49:24.407935       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:49:25.140107       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:49:54.417612       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	
	
	==> kube-proxy [2bad298334c162a0a0d06da2d88a31120a66f2735ed678b46ccc30419636d364] <==
	I0717 01:29:10.571655       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:29:10.582119       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.48"]
	I0717 01:29:10.618911       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:29:10.618953       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:29:10.619008       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:29:10.621543       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:29:10.621805       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:29:10.621829       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:29:10.623123       1 config.go:192] "Starting service config controller"
	I0717 01:29:10.623160       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:29:10.623185       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:29:10.623189       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:29:10.623780       1 config.go:319] "Starting node config controller"
	I0717 01:29:10.623806       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:29:10.723529       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:29:10.723621       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:29:10.723873       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [98433f2cdcf43317c398f244d9677a19617e3068b583d7223413e9692da37802] <==
	I0717 01:29:06.364868       1 serving.go:380] Generated self-signed cert in-memory
	W0717 01:29:09.149828       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:29:09.149986       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:29:09.150080       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:29:09.150111       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:29:09.192931       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 01:29:09.193086       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:29:09.206099       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:29:09.208242       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:29:09.208297       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:29:09.208334       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:29:09.309478       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:47:04 embed-certs-484167 kubelet[934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:47:05 embed-certs-484167 kubelet[934]: E0717 01:47:05.934063     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:47:18 embed-certs-484167 kubelet[934]: E0717 01:47:18.934820     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:47:33 embed-certs-484167 kubelet[934]: E0717 01:47:33.933944     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:47:45 embed-certs-484167 kubelet[934]: E0717 01:47:45.933787     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:48:00 embed-certs-484167 kubelet[934]: E0717 01:48:00.933618     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:48:04 embed-certs-484167 kubelet[934]: E0717 01:48:04.959797     934 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:48:04 embed-certs-484167 kubelet[934]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:48:04 embed-certs-484167 kubelet[934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:48:04 embed-certs-484167 kubelet[934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:48:04 embed-certs-484167 kubelet[934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:48:11 embed-certs-484167 kubelet[934]: E0717 01:48:11.933862     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:48:24 embed-certs-484167 kubelet[934]: E0717 01:48:24.934515     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:48:37 embed-certs-484167 kubelet[934]: E0717 01:48:37.935305     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:48:51 embed-certs-484167 kubelet[934]: E0717 01:48:51.934653     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:49:03 embed-certs-484167 kubelet[934]: E0717 01:49:03.935454     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:49:04 embed-certs-484167 kubelet[934]: E0717 01:49:04.956118     934 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:49:04 embed-certs-484167 kubelet[934]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:49:04 embed-certs-484167 kubelet[934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:49:04 embed-certs-484167 kubelet[934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:49:04 embed-certs-484167 kubelet[934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:49:14 embed-certs-484167 kubelet[934]: E0717 01:49:14.934541     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:49:27 embed-certs-484167 kubelet[934]: E0717 01:49:27.933541     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:49:39 embed-certs-484167 kubelet[934]: E0717 01:49:39.934978     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	Jul 17 01:49:50 embed-certs-484167 kubelet[934]: E0717 01:49:50.934475     934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2qwf6" podUID="caefc20d-d993-46cb-b815-e4ae30ce4e85"
	
	
	==> storage-provisioner [a425272031e798b7f4314ce690384ffd7bed07f3adf6cea14156f2bbc80ce185] <==
	I0717 01:29:41.267596       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:29:41.279238       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:29:41.279312       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:29:58.678058       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:29:58.678222       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-484167_47d48a8e-425f-4307-803e-6d7e5fd0690c!
	I0717 01:29:58.679652       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d2ee878-b2ac-4f2d-a5aa-b2ff6d096a10", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-484167_47d48a8e-425f-4307-803e-6d7e5fd0690c became leader
	I0717 01:29:58.778949       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-484167_47d48a8e-425f-4307-803e-6d7e5fd0690c!
	
	
	==> storage-provisioner [dc597519e45ca19c27ea84eec31c3ece35ae704699cc5270a53a2ffe7ed44272] <==
	I0717 01:29:10.535645       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 01:29:40.538575       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
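The kubelet section above shows metrics-server-569cc877fc-2qwf6 stuck in ImagePullBackOff because its image points at the unreachable registry fake.domain/registry.k8s.io/echoserver:1.4, and the storage-provisioner logs show the earlier instance exiting after a 30s API-server timeout while its replacement acquires the kube-system/k8s.io-minikube-hostpath lease. As a rough sketch only (not part of the minikube test suite; the kubeconfig location is an assumption), waiting reasons like this can be surfaced with client-go:

```go
// waitreasons.go: a minimal sketch (not minikube code) that lists kube-system
// pods and prints any container stuck in a waiting state, e.g. ImagePullBackOff.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig location; point this at the profile's
	// kubeconfig when inspecting a minikube cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if st.State.Waiting != nil {
				// Reasons such as ImagePullBackOff or CrashLoopBackOff appear here.
				fmt.Printf("%s/%s: %s (%s)\n", p.Name, st.Name, st.State.Waiting.Reason, st.State.Waiting.Message)
			}
		}
	}
}
```

Run against this profile, a check like this would report ImagePullBackOff for the metrics-server container, matching the kubelet messages above.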
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-484167 -n embed-certs-484167
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-484167 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-2qwf6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-484167 describe pod metrics-server-569cc877fc-2qwf6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-484167 describe pod metrics-server-569cc877fc-2qwf6: exit status 1 (61.238413ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-2qwf6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-484167 describe pod metrics-server-569cc877fc-2qwf6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (436.64s)
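In this post-mortem the pod appears in the non-running list, yet the follow-up kubectl describe returns NotFound, so it was removed between the two commands; given the delete -p embed-certs-484167 entry at 01:49 in the Audit table further down, profile teardown racing the post-mortem is one plausible explanation. A hypothetical helper (not the code in helpers_test.go) that makes that race explicit instead of surfacing exit status 1 might look like this:

```go
// describeIfPresent is a hypothetical sketch, not part of helpers_test.go.
// It tolerates a pod that vanishes between listing and the follow-up get.
package postmortem

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func describeIfPresent(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// The pod was deleted after it was listed; report that instead of failing.
		fmt.Printf("pod %s/%s no longer exists (possibly deleted with the profile)\n", ns, name)
		return nil
	}
	if err != nil {
		return err
	}
	fmt.Printf("pod %s/%s phase=%s node=%s\n", ns, pod.Name, pod.Status.Phase, pod.Spec.NodeName)
	return nil
}
```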

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (473.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-945694 -n default-k8s-diff-port-945694
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-17 01:50:54.472301685 +0000 UTC m=+6385.276447512
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-945694 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-945694 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.549µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-945694 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
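The failing assertion above is a two-step check: wait up to 9m0s for a pod labelled k8s-app=kubernetes-dashboard, then read the dashboard-metrics-scraper deployment and require its info to contain registry.k8s.io/echoserver:1.4, the substitute image the test expects. Both steps share one deadline, which is why the describe step fails after only 1.549µs once the wait has consumed the full 9m0s. The sketch below illustrates that pattern under those assumptions; it is not the actual start_stop_delete_test.go code:

```go
// dashboardCheck is an illustrative sketch of the wait-then-verify pattern
// described by the output above, not the minikube test implementation.
package postmortem

import (
	"context"
	"fmt"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func dashboardCheck(parent context.Context, cs *kubernetes.Clientset) error {
	// One deadline covers both the pod wait and the image check, mirroring the
	// "context deadline exceeded" seen for the describe step above.
	ctx, cancel := context.WithTimeout(parent, 9*time.Minute)
	defer cancel()

	for {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			return fmt.Errorf("waiting for dashboard pod: %w", err)
		}
		if len(pods.Items) > 0 {
			break
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("dashboard pod never appeared: %w", ctx.Err())
		case <-time.After(5 * time.Second):
		}
	}

	dep, err := cs.AppsV1().Deployments("kubernetes-dashboard").Get(ctx, "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, c := range dep.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
			return nil
		}
	}
	return fmt.Errorf("deployment does not use the expected test image")
}
```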
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-945694 -n default-k8s-diff-port-945694
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-945694 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-945694 logs -n 25: (1.426071927s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo cat                           | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo cat                           | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo cat                           | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo cat                           | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo                               | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo find                          | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-453036 sudo crio                          | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p kindnet-453036                                    | kindnet-453036        | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	| start   | -p calico-453036 --memory=3072                       | calico-453036         | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:50 UTC |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| delete  | -p embed-certs-484167                                | embed-certs-484167    | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC | 17 Jul 24 01:49 UTC |
	| start   | -p custom-flannel-453036                             | custom-flannel-453036 | jenkins | v1.33.1 | 17 Jul 24 01:49 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p calico-453036 pgrep -a                            | calico-453036         | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 01:50 UTC |
	|         | kubelet                                              |                       |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo cat                            | calico-453036         | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 01:50 UTC |
	|         | /etc/nsswitch.conf                                   |                       |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo cat                            | calico-453036         | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 01:50 UTC |
	|         | /etc/hosts                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo cat                            | calico-453036         | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 01:50 UTC |
	|         | /etc/resolv.conf                                     |                       |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo crictl                         | calico-453036         | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 01:50 UTC |
	|         | pods                                                 |                       |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo crictl                         | calico-453036         | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 01:50 UTC |
	|         | ps --all                                             |                       |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo find                           | calico-453036         | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 01:50 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p calico-453036 sudo ip a s                         | calico-453036         | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:49:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:49:56.252584   78395 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:49:56.252727   78395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:49:56.252739   78395 out.go:304] Setting ErrFile to fd 2...
	I0717 01:49:56.252745   78395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:49:56.252934   78395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:49:56.253586   78395 out.go:298] Setting JSON to false
	I0717 01:49:56.254518   78395 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9145,"bootTime":1721171851,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:49:56.254575   78395 start.go:139] virtualization: kvm guest
	I0717 01:49:56.256506   78395 out.go:177] * [custom-flannel-453036] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:49:56.257578   78395 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:49:56.257649   78395 notify.go:220] Checking for updates...
	I0717 01:49:56.259620   78395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:49:56.260626   78395 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:49:56.261670   78395 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:49:56.262696   78395 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:49:56.263666   78395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:49:56.265099   78395 config.go:182] Loaded profile config "calico-453036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:49:56.265204   78395 config.go:182] Loaded profile config "default-k8s-diff-port-945694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:49:56.265283   78395 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:49:56.265349   78395 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:49:56.300279   78395 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 01:49:56.301404   78395 start.go:297] selected driver: kvm2
	I0717 01:49:56.301422   78395 start.go:901] validating driver "kvm2" against <nil>
	I0717 01:49:56.301437   78395 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:49:56.302121   78395 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:49:56.302189   78395 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:49:56.317952   78395 install.go:137] /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:49:56.317995   78395 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 01:49:56.318225   78395 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:49:56.318285   78395 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0717 01:49:56.318298   78395 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0717 01:49:56.318346   78395 start.go:340] cluster config:
	{Name:custom-flannel-453036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:49:56.318433   78395 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:49:56.319997   78395 out.go:177] * Starting "custom-flannel-453036" primary control-plane node in "custom-flannel-453036" cluster
	I0717 01:49:56.321073   78395 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:49:56.321103   78395 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 01:49:56.321124   78395 cache.go:56] Caching tarball of preloaded images
	I0717 01:49:56.321193   78395 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:49:56.321205   78395 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 01:49:56.321296   78395 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/config.json ...
	I0717 01:49:56.321319   78395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/config.json: {Name:mkb4acfc52aab822583c1ecd975e5b11a6badd3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:49:56.321459   78395 start.go:360] acquireMachinesLock for custom-flannel-453036: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:49:56.321507   78395 start.go:364] duration metric: took 32.486µs to acquireMachinesLock for "custom-flannel-453036"
	I0717 01:49:56.321531   78395 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:49:56.321618   78395 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 01:49:54.085150   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:54.584151   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:55.084966   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:55.584479   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:56.084210   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:56.584764   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:57.084517   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:57.584457   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:58.084409   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:58.584265   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:56.323904   78395 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 01:49:56.324080   78395 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:49:56.324127   78395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:49:56.339588   78395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34829
	I0717 01:49:56.340040   78395 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:49:56.340618   78395 main.go:141] libmachine: Using API Version  1
	I0717 01:49:56.340639   78395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:49:56.340932   78395 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:49:56.341101   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetMachineName
	I0717 01:49:56.341254   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .DriverName
	I0717 01:49:56.341384   78395 start.go:159] libmachine.API.Create for "custom-flannel-453036" (driver="kvm2")
	I0717 01:49:56.341407   78395 client.go:168] LocalClient.Create starting
	I0717 01:49:56.341433   78395 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 01:49:56.341459   78395 main.go:141] libmachine: Decoding PEM data...
	I0717 01:49:56.341472   78395 main.go:141] libmachine: Parsing certificate...
	I0717 01:49:56.341526   78395 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 01:49:56.341543   78395 main.go:141] libmachine: Decoding PEM data...
	I0717 01:49:56.341553   78395 main.go:141] libmachine: Parsing certificate...
	I0717 01:49:56.341567   78395 main.go:141] libmachine: Running pre-create checks...
	I0717 01:49:56.341575   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .PreCreateCheck
	I0717 01:49:56.341950   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetConfigRaw
	I0717 01:49:56.342368   78395 main.go:141] libmachine: Creating machine...
	I0717 01:49:56.342381   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .Create
	I0717 01:49:56.342533   78395 main.go:141] libmachine: (custom-flannel-453036) Creating KVM machine...
	I0717 01:49:56.343839   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found existing default KVM network
	I0717 01:49:56.345076   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:49:56.344917   78418 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:26:93} reservation:<nil>}
	I0717 01:49:56.346058   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:49:56.345966   78418 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:64:6c:f9} reservation:<nil>}
	I0717 01:49:56.347105   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:49:56.347006   78418 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:e0:5d:17} reservation:<nil>}
	I0717 01:49:56.348454   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:49:56.348352   78418 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289aa0}
	I0717 01:49:56.348482   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | created network xml: 
	I0717 01:49:56.348496   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | <network>
	I0717 01:49:56.348505   78395 main.go:141] libmachine: (custom-flannel-453036) DBG |   <name>mk-custom-flannel-453036</name>
	I0717 01:49:56.348514   78395 main.go:141] libmachine: (custom-flannel-453036) DBG |   <dns enable='no'/>
	I0717 01:49:56.348520   78395 main.go:141] libmachine: (custom-flannel-453036) DBG |   
	I0717 01:49:56.348530   78395 main.go:141] libmachine: (custom-flannel-453036) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0717 01:49:56.348540   78395 main.go:141] libmachine: (custom-flannel-453036) DBG |     <dhcp>
	I0717 01:49:56.348551   78395 main.go:141] libmachine: (custom-flannel-453036) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0717 01:49:56.348579   78395 main.go:141] libmachine: (custom-flannel-453036) DBG |     </dhcp>
	I0717 01:49:56.348588   78395 main.go:141] libmachine: (custom-flannel-453036) DBG |   </ip>
	I0717 01:49:56.348600   78395 main.go:141] libmachine: (custom-flannel-453036) DBG |   
	I0717 01:49:56.348608   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | </network>
	I0717 01:49:56.348615   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | 
	I0717 01:49:56.353729   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | trying to create private KVM network mk-custom-flannel-453036 192.168.72.0/24...
	I0717 01:49:56.425796   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | private KVM network mk-custom-flannel-453036 192.168.72.0/24 created
	I0717 01:49:56.425823   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:49:56.425774   78418 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:49:56.425837   78395 main.go:141] libmachine: (custom-flannel-453036) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036 ...
	I0717 01:49:56.425855   78395 main.go:141] libmachine: (custom-flannel-453036) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 01:49:56.425973   78395 main.go:141] libmachine: (custom-flannel-453036) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 01:49:56.678918   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:49:56.678764   78418 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036/id_rsa...
	I0717 01:49:57.014772   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:49:57.014658   78418 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036/custom-flannel-453036.rawdisk...
	I0717 01:49:57.014803   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Writing magic tar header
	I0717 01:49:57.014819   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Writing SSH key tar header
	I0717 01:49:57.014830   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:49:57.014764   78418 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036 ...
	I0717 01:49:57.014844   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036
	I0717 01:49:57.014899   78395 main.go:141] libmachine: (custom-flannel-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036 (perms=drwx------)
	I0717 01:49:57.014953   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 01:49:57.014968   78395 main.go:141] libmachine: (custom-flannel-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 01:49:57.014986   78395 main.go:141] libmachine: (custom-flannel-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 01:49:57.015001   78395 main.go:141] libmachine: (custom-flannel-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 01:49:57.015032   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:49:57.015066   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 01:49:57.015083   78395 main.go:141] libmachine: (custom-flannel-453036) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 01:49:57.015096   78395 main.go:141] libmachine: (custom-flannel-453036) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 01:49:57.015106   78395 main.go:141] libmachine: (custom-flannel-453036) Creating domain...
	I0717 01:49:57.015121   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 01:49:57.015140   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Checking permissions on dir: /home/jenkins
	I0717 01:49:57.015156   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Checking permissions on dir: /home
	I0717 01:49:57.015169   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Skipping /home - not owner
	I0717 01:49:57.016202   78395 main.go:141] libmachine: (custom-flannel-453036) define libvirt domain using xml: 
	I0717 01:49:57.016224   78395 main.go:141] libmachine: (custom-flannel-453036) <domain type='kvm'>
	I0717 01:49:57.016234   78395 main.go:141] libmachine: (custom-flannel-453036)   <name>custom-flannel-453036</name>
	I0717 01:49:57.016243   78395 main.go:141] libmachine: (custom-flannel-453036)   <memory unit='MiB'>3072</memory>
	I0717 01:49:57.016251   78395 main.go:141] libmachine: (custom-flannel-453036)   <vcpu>2</vcpu>
	I0717 01:49:57.016258   78395 main.go:141] libmachine: (custom-flannel-453036)   <features>
	I0717 01:49:57.016276   78395 main.go:141] libmachine: (custom-flannel-453036)     <acpi/>
	I0717 01:49:57.016287   78395 main.go:141] libmachine: (custom-flannel-453036)     <apic/>
	I0717 01:49:57.016295   78395 main.go:141] libmachine: (custom-flannel-453036)     <pae/>
	I0717 01:49:57.016305   78395 main.go:141] libmachine: (custom-flannel-453036)     
	I0717 01:49:57.016322   78395 main.go:141] libmachine: (custom-flannel-453036)   </features>
	I0717 01:49:57.016333   78395 main.go:141] libmachine: (custom-flannel-453036)   <cpu mode='host-passthrough'>
	I0717 01:49:57.016338   78395 main.go:141] libmachine: (custom-flannel-453036)   
	I0717 01:49:57.016342   78395 main.go:141] libmachine: (custom-flannel-453036)   </cpu>
	I0717 01:49:57.016350   78395 main.go:141] libmachine: (custom-flannel-453036)   <os>
	I0717 01:49:57.016355   78395 main.go:141] libmachine: (custom-flannel-453036)     <type>hvm</type>
	I0717 01:49:57.016360   78395 main.go:141] libmachine: (custom-flannel-453036)     <boot dev='cdrom'/>
	I0717 01:49:57.016364   78395 main.go:141] libmachine: (custom-flannel-453036)     <boot dev='hd'/>
	I0717 01:49:57.016370   78395 main.go:141] libmachine: (custom-flannel-453036)     <bootmenu enable='no'/>
	I0717 01:49:57.016374   78395 main.go:141] libmachine: (custom-flannel-453036)   </os>
	I0717 01:49:57.016379   78395 main.go:141] libmachine: (custom-flannel-453036)   <devices>
	I0717 01:49:57.016395   78395 main.go:141] libmachine: (custom-flannel-453036)     <disk type='file' device='cdrom'>
	I0717 01:49:57.016408   78395 main.go:141] libmachine: (custom-flannel-453036)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036/boot2docker.iso'/>
	I0717 01:49:57.016420   78395 main.go:141] libmachine: (custom-flannel-453036)       <target dev='hdc' bus='scsi'/>
	I0717 01:49:57.016441   78395 main.go:141] libmachine: (custom-flannel-453036)       <readonly/>
	I0717 01:49:57.016459   78395 main.go:141] libmachine: (custom-flannel-453036)     </disk>
	I0717 01:49:57.016471   78395 main.go:141] libmachine: (custom-flannel-453036)     <disk type='file' device='disk'>
	I0717 01:49:57.016485   78395 main.go:141] libmachine: (custom-flannel-453036)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 01:49:57.016516   78395 main.go:141] libmachine: (custom-flannel-453036)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036/custom-flannel-453036.rawdisk'/>
	I0717 01:49:57.016551   78395 main.go:141] libmachine: (custom-flannel-453036)       <target dev='hda' bus='virtio'/>
	I0717 01:49:57.016605   78395 main.go:141] libmachine: (custom-flannel-453036)     </disk>
	I0717 01:49:57.016627   78395 main.go:141] libmachine: (custom-flannel-453036)     <interface type='network'>
	I0717 01:49:57.016641   78395 main.go:141] libmachine: (custom-flannel-453036)       <source network='mk-custom-flannel-453036'/>
	I0717 01:49:57.016652   78395 main.go:141] libmachine: (custom-flannel-453036)       <model type='virtio'/>
	I0717 01:49:57.016662   78395 main.go:141] libmachine: (custom-flannel-453036)     </interface>
	I0717 01:49:57.016686   78395 main.go:141] libmachine: (custom-flannel-453036)     <interface type='network'>
	I0717 01:49:57.016712   78395 main.go:141] libmachine: (custom-flannel-453036)       <source network='default'/>
	I0717 01:49:57.016740   78395 main.go:141] libmachine: (custom-flannel-453036)       <model type='virtio'/>
	I0717 01:49:57.016753   78395 main.go:141] libmachine: (custom-flannel-453036)     </interface>
	I0717 01:49:57.016766   78395 main.go:141] libmachine: (custom-flannel-453036)     <serial type='pty'>
	I0717 01:49:57.016777   78395 main.go:141] libmachine: (custom-flannel-453036)       <target port='0'/>
	I0717 01:49:57.016789   78395 main.go:141] libmachine: (custom-flannel-453036)     </serial>
	I0717 01:49:57.016803   78395 main.go:141] libmachine: (custom-flannel-453036)     <console type='pty'>
	I0717 01:49:57.016822   78395 main.go:141] libmachine: (custom-flannel-453036)       <target type='serial' port='0'/>
	I0717 01:49:57.016835   78395 main.go:141] libmachine: (custom-flannel-453036)     </console>
	I0717 01:49:57.016848   78395 main.go:141] libmachine: (custom-flannel-453036)     <rng model='virtio'>
	I0717 01:49:57.016896   78395 main.go:141] libmachine: (custom-flannel-453036)       <backend model='random'>/dev/random</backend>
	I0717 01:49:57.016913   78395 main.go:141] libmachine: (custom-flannel-453036)     </rng>
	I0717 01:49:57.016926   78395 main.go:141] libmachine: (custom-flannel-453036)     
	I0717 01:49:57.016938   78395 main.go:141] libmachine: (custom-flannel-453036)     
	I0717 01:49:57.016951   78395 main.go:141] libmachine: (custom-flannel-453036)   </devices>
	I0717 01:49:57.016962   78395 main.go:141] libmachine: (custom-flannel-453036) </domain>
	I0717 01:49:57.016978   78395 main.go:141] libmachine: (custom-flannel-453036) 
	I0717 01:49:57.020625   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:79:2f:1f in network default
	I0717 01:49:57.021349   78395 main.go:141] libmachine: (custom-flannel-453036) Ensuring networks are active...
	I0717 01:49:57.021370   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:49:57.022053   78395 main.go:141] libmachine: (custom-flannel-453036) Ensuring network default is active
	I0717 01:49:57.022356   78395 main.go:141] libmachine: (custom-flannel-453036) Ensuring network mk-custom-flannel-453036 is active
	I0717 01:49:57.022817   78395 main.go:141] libmachine: (custom-flannel-453036) Getting domain xml...
	I0717 01:49:57.023531   78395 main.go:141] libmachine: (custom-flannel-453036) Creating domain...
	I0717 01:49:58.282137   78395 main.go:141] libmachine: (custom-flannel-453036) Waiting to get IP...
	I0717 01:49:58.283205   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:49:58.283764   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:49:58.283791   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:49:58.283744   78418 retry.go:31] will retry after 209.977399ms: waiting for machine to come up
	I0717 01:49:58.495170   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:49:58.495781   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:49:58.495806   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:49:58.495748   78418 retry.go:31] will retry after 305.445937ms: waiting for machine to come up
	I0717 01:49:58.803351   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:49:58.803968   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:49:58.803989   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:49:58.803918   78418 retry.go:31] will retry after 422.374758ms: waiting for machine to come up
	I0717 01:49:59.227387   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:49:59.227801   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:49:59.227832   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:49:59.227760   78418 retry.go:31] will retry after 477.407559ms: waiting for machine to come up
	I0717 01:49:59.706300   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:49:59.706803   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:49:59.706825   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:49:59.706741   78418 retry.go:31] will retry after 731.938237ms: waiting for machine to come up
	I0717 01:50:00.440730   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:00.441258   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:50:00.441282   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:50:00.441216   78418 retry.go:31] will retry after 668.608664ms: waiting for machine to come up
	I0717 01:50:01.111696   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:01.112359   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:50:01.112388   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:50:01.112309   78418 retry.go:31] will retry after 944.762776ms: waiting for machine to come up
	I0717 01:49:59.084600   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:49:59.584639   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:00.084999   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:00.584712   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:01.084597   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:01.584453   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:02.084509   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:02.584583   77834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:02.689833   77834 kubeadm.go:1113] duration metric: took 11.741893081s to wait for elevateKubeSystemPrivileges
	I0717 01:50:02.689877   77834 kubeadm.go:394] duration metric: took 24.857630659s to StartCluster
	I0717 01:50:02.689898   77834 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:50:02.690017   77834 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:50:02.691780   77834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:50:02.692018   77834 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.27 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:50:02.692031   77834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 01:50:02.692166   77834 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:50:02.692235   77834 addons.go:69] Setting storage-provisioner=true in profile "calico-453036"
	I0717 01:50:02.692242   77834 config.go:182] Loaded profile config "calico-453036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:50:02.692259   77834 addons.go:234] Setting addon storage-provisioner=true in "calico-453036"
	I0717 01:50:02.692266   77834 addons.go:69] Setting default-storageclass=true in profile "calico-453036"
	I0717 01:50:02.692305   77834 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-453036"
	I0717 01:50:02.692323   77834 host.go:66] Checking if "calico-453036" exists ...
	I0717 01:50:02.692747   77834 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:50:02.692782   77834 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:50:02.692803   77834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:50:02.692823   77834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:50:02.693509   77834 out.go:177] * Verifying Kubernetes components...
	I0717 01:50:02.694994   77834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:50:02.710013   77834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42205
	I0717 01:50:02.710512   77834 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:50:02.711091   77834 main.go:141] libmachine: Using API Version  1
	I0717 01:50:02.711121   77834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:50:02.711513   77834 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:50:02.712120   77834 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:50:02.712145   77834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:50:02.712924   77834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34749
	I0717 01:50:02.713543   77834 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:50:02.714046   77834 main.go:141] libmachine: Using API Version  1
	I0717 01:50:02.714063   77834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:50:02.714483   77834 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:50:02.714735   77834 main.go:141] libmachine: (calico-453036) Calling .GetState
	I0717 01:50:02.718537   77834 addons.go:234] Setting addon default-storageclass=true in "calico-453036"
	I0717 01:50:02.718576   77834 host.go:66] Checking if "calico-453036" exists ...
	I0717 01:50:02.718959   77834 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:50:02.718976   77834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:50:02.729544   77834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34789
	I0717 01:50:02.730227   77834 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:50:02.730841   77834 main.go:141] libmachine: Using API Version  1
	I0717 01:50:02.730871   77834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:50:02.731192   77834 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:50:02.731360   77834 main.go:141] libmachine: (calico-453036) Calling .GetState
	I0717 01:50:02.733294   77834 main.go:141] libmachine: (calico-453036) Calling .DriverName
	I0717 01:50:02.734998   77834 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:50:02.736322   77834 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:50:02.736342   77834 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:50:02.736363   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHHostname
	I0717 01:50:02.740100   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:50:02.740689   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:50:02.740717   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:50:02.740996   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHPort
	I0717 01:50:02.741405   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:50:02.741586   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHUsername
	I0717 01:50:02.741709   77834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41543
	I0717 01:50:02.741709   77834 sshutil.go:53] new ssh client: &{IP:192.168.61.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036/id_rsa Username:docker}
	I0717 01:50:02.742106   77834 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:50:02.742692   77834 main.go:141] libmachine: Using API Version  1
	I0717 01:50:02.742707   77834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:50:02.742996   77834 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:50:02.743540   77834 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:50:02.743567   77834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:50:02.760130   77834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39011
	I0717 01:50:02.760713   77834 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:50:02.761243   77834 main.go:141] libmachine: Using API Version  1
	I0717 01:50:02.761261   77834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:50:02.761823   77834 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:50:02.761987   77834 main.go:141] libmachine: (calico-453036) Calling .GetState
	I0717 01:50:02.764066   77834 main.go:141] libmachine: (calico-453036) Calling .DriverName
	I0717 01:50:02.764346   77834 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:50:02.764362   77834 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:50:02.764379   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHHostname
	I0717 01:50:02.767827   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:50:02.767854   77834 main.go:141] libmachine: (calico-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:51:6b", ip: ""} in network mk-calico-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:49:23 +0000 UTC Type:0 Mac:52:54:00:95:51:6b Iaid: IPaddr:192.168.61.27 Prefix:24 Hostname:calico-453036 Clientid:01:52:54:00:95:51:6b}
	I0717 01:50:02.767872   77834 main.go:141] libmachine: (calico-453036) DBG | domain calico-453036 has defined IP address 192.168.61.27 and MAC address 52:54:00:95:51:6b in network mk-calico-453036
	I0717 01:50:02.767915   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHPort
	I0717 01:50:02.768123   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHKeyPath
	I0717 01:50:02.768266   77834 main.go:141] libmachine: (calico-453036) Calling .GetSSHUsername
	I0717 01:50:02.768494   77834 sshutil.go:53] new ssh client: &{IP:192.168.61.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/calico-453036/id_rsa Username:docker}
	I0717 01:50:03.157056   77834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 01:50:03.157059   77834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:50:03.167799   77834 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:50:03.178585   77834 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:50:03.795599   77834 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0717 01:50:03.796867   77834 node_ready.go:35] waiting up to 15m0s for node "calico-453036" to be "Ready" ...
	I0717 01:50:04.184147   77834 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.016301375s)
	I0717 01:50:04.184176   77834 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.00556207s)
	I0717 01:50:04.184212   77834 main.go:141] libmachine: Making call to close driver server
	I0717 01:50:04.184220   77834 main.go:141] libmachine: Making call to close driver server
	I0717 01:50:04.184239   77834 main.go:141] libmachine: (calico-453036) Calling .Close
	I0717 01:50:04.184228   77834 main.go:141] libmachine: (calico-453036) Calling .Close
	I0717 01:50:04.184573   77834 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:50:04.184585   77834 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:50:04.184594   77834 main.go:141] libmachine: Making call to close driver server
	I0717 01:50:04.184602   77834 main.go:141] libmachine: (calico-453036) Calling .Close
	I0717 01:50:04.186604   77834 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:50:04.186623   77834 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:50:04.186646   77834 main.go:141] libmachine: (calico-453036) DBG | Closing plugin on server side
	I0717 01:50:04.186651   77834 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:50:04.186666   77834 main.go:141] libmachine: Making call to close driver server
	I0717 01:50:04.186667   77834 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:50:04.186683   77834 main.go:141] libmachine: (calico-453036) Calling .Close
	I0717 01:50:04.186896   77834 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:50:04.186916   77834 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:50:04.186918   77834 main.go:141] libmachine: (calico-453036) DBG | Closing plugin on server side
	I0717 01:50:04.203838   77834 main.go:141] libmachine: Making call to close driver server
	I0717 01:50:04.203866   77834 main.go:141] libmachine: (calico-453036) Calling .Close
	I0717 01:50:04.204172   77834 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:50:04.204189   77834 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:50:04.205657   77834 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 01:50:02.058509   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:02.059110   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:50:02.059159   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:50:02.059063   78418 retry.go:31] will retry after 903.647158ms: waiting for machine to come up
	I0717 01:50:02.964339   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:02.964897   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:50:02.964924   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:50:02.964851   78418 retry.go:31] will retry after 1.732452047s: waiting for machine to come up
	I0717 01:50:04.699151   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:04.699674   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:50:04.699704   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:50:04.699626   78418 retry.go:31] will retry after 1.909836712s: waiting for machine to come up
	I0717 01:50:04.206932   77834 addons.go:510] duration metric: took 1.514767173s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0717 01:50:04.304348   77834 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-453036" context rescaled to 1 replicas
	I0717 01:50:05.802869   77834 node_ready.go:53] node "calico-453036" has status "Ready":"False"
	I0717 01:50:08.360259   77834 node_ready.go:53] node "calico-453036" has status "Ready":"False"
	I0717 01:50:06.611796   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:06.612468   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:50:06.612527   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:50:06.612457   78418 retry.go:31] will retry after 2.666175103s: waiting for machine to come up
	I0717 01:50:09.281911   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:09.282432   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:50:09.282455   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:50:09.282403   78418 retry.go:31] will retry after 2.651168913s: waiting for machine to come up
	I0717 01:50:10.300704   77834 node_ready.go:49] node "calico-453036" has status "Ready":"True"
	I0717 01:50:10.300729   77834 node_ready.go:38] duration metric: took 6.503837541s for node "calico-453036" to be "Ready" ...
	I0717 01:50:10.300740   77834 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:50:10.310455   77834 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-564985c589-pqclt" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:12.317217   77834 pod_ready.go:102] pod "calico-kube-controllers-564985c589-pqclt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:50:11.934957   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:11.935459   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:50:11.935487   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:50:11.935423   78418 retry.go:31] will retry after 2.768339405s: waiting for machine to come up
	I0717 01:50:14.705996   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:14.706464   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find current IP address of domain custom-flannel-453036 in network mk-custom-flannel-453036
	I0717 01:50:14.706489   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | I0717 01:50:14.706428   78418 retry.go:31] will retry after 5.404265957s: waiting for machine to come up
	I0717 01:50:14.317596   77834 pod_ready.go:102] pod "calico-kube-controllers-564985c589-pqclt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:50:16.317681   77834 pod_ready.go:102] pod "calico-kube-controllers-564985c589-pqclt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:50:20.114604   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:20.115244   78395 main.go:141] libmachine: (custom-flannel-453036) Found IP for machine: 192.168.72.187
	I0717 01:50:20.115266   78395 main.go:141] libmachine: (custom-flannel-453036) Reserving static IP address...
	I0717 01:50:20.115281   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has current primary IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:20.115663   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find host DHCP lease matching {name: "custom-flannel-453036", mac: "52:54:00:d3:42:42", ip: "192.168.72.187"} in network mk-custom-flannel-453036
	I0717 01:50:20.190915   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Getting to WaitForSSH function...
	I0717 01:50:20.190949   78395 main.go:141] libmachine: (custom-flannel-453036) Reserved static IP address: 192.168.72.187
	I0717 01:50:20.190963   78395 main.go:141] libmachine: (custom-flannel-453036) Waiting for SSH to be available...
	I0717 01:50:20.193533   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:20.193944   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036
	I0717 01:50:20.193977   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | unable to find defined IP address of network mk-custom-flannel-453036 interface with MAC address 52:54:00:d3:42:42
	I0717 01:50:20.194135   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Using SSH client type: external
	I0717 01:50:20.194157   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036/id_rsa (-rw-------)
	I0717 01:50:20.194184   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:50:20.194198   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | About to run SSH command:
	I0717 01:50:20.194211   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | exit 0
	I0717 01:50:20.198492   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | SSH cmd err, output: exit status 255: 
	I0717 01:50:20.198519   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 01:50:20.198528   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | command : exit 0
	I0717 01:50:20.198533   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | err     : exit status 255
	I0717 01:50:20.198541   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | output  : 
	I0717 01:50:18.824879   77834 pod_ready.go:102] pod "calico-kube-controllers-564985c589-pqclt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:50:21.316774   77834 pod_ready.go:102] pod "calico-kube-controllers-564985c589-pqclt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:50:23.317123   77834 pod_ready.go:102] pod "calico-kube-controllers-564985c589-pqclt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:50:23.199525   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Getting to WaitForSSH function...
	I0717 01:50:23.202378   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.202818   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:23.202864   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.202968   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Using SSH client type: external
	I0717 01:50:23.202996   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036/id_rsa (-rw-------)
	I0717 01:50:23.203026   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:50:23.203037   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | About to run SSH command:
	I0717 01:50:23.203054   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | exit 0
	I0717 01:50:23.328839   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | SSH cmd err, output: <nil>: 
	I0717 01:50:23.329064   78395 main.go:141] libmachine: (custom-flannel-453036) KVM machine creation complete!
	I0717 01:50:23.329437   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetConfigRaw
	I0717 01:50:23.329991   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .DriverName
	I0717 01:50:23.330209   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .DriverName
	I0717 01:50:23.330384   78395 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 01:50:23.330422   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetState
	I0717 01:50:23.332115   78395 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 01:50:23.332135   78395 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 01:50:23.332145   78395 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 01:50:23.332158   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHHostname
	I0717 01:50:23.334968   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.335388   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:23.335419   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.335579   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHPort
	I0717 01:50:23.335773   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:23.335962   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:23.336139   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHUsername
	I0717 01:50:23.336325   78395 main.go:141] libmachine: Using SSH client type: native
	I0717 01:50:23.336597   78395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0717 01:50:23.336612   78395 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 01:50:23.444278   78395 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:50:23.444301   78395 main.go:141] libmachine: Detecting the provisioner...
	I0717 01:50:23.444312   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHHostname
	I0717 01:50:23.447437   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.447860   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:23.447889   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.448077   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHPort
	I0717 01:50:23.448317   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:23.448503   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:23.448674   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHUsername
	I0717 01:50:23.448846   78395 main.go:141] libmachine: Using SSH client type: native
	I0717 01:50:23.449065   78395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0717 01:50:23.449078   78395 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 01:50:23.561830   78395 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 01:50:23.561931   78395 main.go:141] libmachine: found compatible host: buildroot
	I0717 01:50:23.561947   78395 main.go:141] libmachine: Provisioning with buildroot...
	I0717 01:50:23.561956   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetMachineName
	I0717 01:50:23.562224   78395 buildroot.go:166] provisioning hostname "custom-flannel-453036"
	I0717 01:50:23.562248   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetMachineName
	I0717 01:50:23.562457   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHHostname
	I0717 01:50:23.565394   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.565709   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:23.565737   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.565926   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHPort
	I0717 01:50:23.566077   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:23.566245   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:23.566409   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHUsername
	I0717 01:50:23.566620   78395 main.go:141] libmachine: Using SSH client type: native
	I0717 01:50:23.566784   78395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0717 01:50:23.566797   78395 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-453036 && echo "custom-flannel-453036" | sudo tee /etc/hostname
	I0717 01:50:23.688943   78395 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-453036
	
	I0717 01:50:23.688976   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHHostname
	I0717 01:50:23.691574   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.692079   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:23.692111   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.692343   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHPort
	I0717 01:50:23.692509   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:23.692731   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:23.692910   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHUsername
	I0717 01:50:23.693062   78395 main.go:141] libmachine: Using SSH client type: native
	I0717 01:50:23.693240   78395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0717 01:50:23.693268   78395 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-453036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-453036/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-453036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:50:23.811253   78395 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:50:23.811316   78395 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 01:50:23.811380   78395 buildroot.go:174] setting up certificates
	I0717 01:50:23.811396   78395 provision.go:84] configureAuth start
	I0717 01:50:23.811417   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetMachineName
	I0717 01:50:23.811740   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetIP
	I0717 01:50:23.814441   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.814839   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:23.814910   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.815223   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHHostname
	I0717 01:50:23.818034   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.818457   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:23.818485   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:23.818620   78395 provision.go:143] copyHostCerts
	I0717 01:50:23.818690   78395 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 01:50:23.818706   78395 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 01:50:23.818785   78395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 01:50:23.818904   78395 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 01:50:23.818917   78395 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 01:50:23.818962   78395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 01:50:23.819032   78395 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 01:50:23.819042   78395 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 01:50:23.819076   78395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 01:50:23.819138   78395 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-453036 san=[127.0.0.1 192.168.72.187 custom-flannel-453036 localhost minikube]
	I0717 01:50:24.004483   78395 provision.go:177] copyRemoteCerts
	I0717 01:50:24.004543   78395 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:50:24.004587   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHHostname
	I0717 01:50:24.007375   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.007699   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:24.007723   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.007915   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHPort
	I0717 01:50:24.008118   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:24.008309   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHUsername
	I0717 01:50:24.008464   78395 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036/id_rsa Username:docker}
	I0717 01:50:24.091432   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 01:50:24.117000   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 01:50:24.142056   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:50:24.167764   78395 provision.go:87] duration metric: took 356.351737ms to configureAuth
	I0717 01:50:24.167787   78395 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:50:24.167974   78395 config.go:182] Loaded profile config "custom-flannel-453036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:50:24.168051   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHHostname
	I0717 01:50:24.170689   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.171049   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:24.171080   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.171229   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHPort
	I0717 01:50:24.171459   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:24.171658   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:24.171820   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHUsername
	I0717 01:50:24.171987   78395 main.go:141] libmachine: Using SSH client type: native
	I0717 01:50:24.172158   78395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0717 01:50:24.172179   78395 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:50:24.453268   78395 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:50:24.453289   78395 main.go:141] libmachine: Checking connection to Docker...
	I0717 01:50:24.453297   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetURL
	I0717 01:50:24.454686   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | Using libvirt version 6000000
	I0717 01:50:24.456840   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.457189   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:24.457220   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.457366   78395 main.go:141] libmachine: Docker is up and running!
	I0717 01:50:24.457383   78395 main.go:141] libmachine: Reticulating splines...
	I0717 01:50:24.457390   78395 client.go:171] duration metric: took 28.115974873s to LocalClient.Create
	I0717 01:50:24.457412   78395 start.go:167] duration metric: took 28.116029101s to libmachine.API.Create "custom-flannel-453036"
	I0717 01:50:24.457421   78395 start.go:293] postStartSetup for "custom-flannel-453036" (driver="kvm2")
	I0717 01:50:24.457429   78395 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:50:24.457453   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .DriverName
	I0717 01:50:24.457662   78395 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:50:24.457684   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHHostname
	I0717 01:50:24.459844   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.460211   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:24.460235   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.460364   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHPort
	I0717 01:50:24.460527   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:24.460692   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHUsername
	I0717 01:50:24.460837   78395 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036/id_rsa Username:docker}
	I0717 01:50:24.544865   78395 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:50:24.549559   78395 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:50:24.549582   78395 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:50:24.549641   78395 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:50:24.549724   78395 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:50:24.549843   78395 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:50:24.559794   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:50:24.587900   78395 start.go:296] duration metric: took 130.465112ms for postStartSetup
	I0717 01:50:24.587971   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetConfigRaw
	I0717 01:50:24.588538   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetIP
	I0717 01:50:24.591401   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.591722   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:24.591758   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.591978   78395 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/config.json ...
	I0717 01:50:24.592273   78395 start.go:128] duration metric: took 28.270644448s to createHost
	I0717 01:50:24.592296   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHHostname
	I0717 01:50:24.594769   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.595197   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:24.595228   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.595428   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHPort
	I0717 01:50:24.595615   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:24.595761   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:24.595956   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHUsername
	I0717 01:50:24.596132   78395 main.go:141] libmachine: Using SSH client type: native
	I0717 01:50:24.596354   78395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.187 22 <nil> <nil>}
	I0717 01:50:24.596371   78395 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:50:24.709436   78395 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181024.688434378
	
	I0717 01:50:24.709456   78395 fix.go:216] guest clock: 1721181024.688434378
	I0717 01:50:24.709463   78395 fix.go:229] Guest: 2024-07-17 01:50:24.688434378 +0000 UTC Remote: 2024-07-17 01:50:24.592285817 +0000 UTC m=+28.374536203 (delta=96.148561ms)
	I0717 01:50:24.709495   78395 fix.go:200] guest clock delta is within tolerance: 96.148561ms
	I0717 01:50:24.709499   78395 start.go:83] releasing machines lock for "custom-flannel-453036", held for 28.387981493s
	I0717 01:50:24.709520   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .DriverName
	I0717 01:50:24.709791   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetIP
	I0717 01:50:24.712456   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.712818   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:24.712854   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.713020   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .DriverName
	I0717 01:50:24.713563   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .DriverName
	I0717 01:50:24.713761   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .DriverName
	I0717 01:50:24.713865   78395 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:50:24.713918   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHHostname
	I0717 01:50:24.713982   78395 ssh_runner.go:195] Run: cat /version.json
	I0717 01:50:24.714005   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHHostname
	I0717 01:50:24.716749   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.716950   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.717155   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:24.717199   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.717334   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHPort
	I0717 01:50:24.717350   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:24.717382   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:24.717526   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:24.717532   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHPort
	I0717 01:50:24.717691   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHUsername
	I0717 01:50:24.717698   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHKeyPath
	I0717 01:50:24.717840   78395 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036/id_rsa Username:docker}
	I0717 01:50:24.717907   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetSSHUsername
	I0717 01:50:24.718071   78395 sshutil.go:53] new ssh client: &{IP:192.168.72.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/custom-flannel-453036/id_rsa Username:docker}
	I0717 01:50:24.794380   78395 ssh_runner.go:195] Run: systemctl --version
	I0717 01:50:24.818945   78395 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:50:24.976540   78395 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:50:24.983326   78395 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:50:24.983403   78395 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:50:24.999703   78395 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:50:24.999724   78395 start.go:495] detecting cgroup driver to use...
	I0717 01:50:24.999776   78395 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:50:25.017819   78395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:50:25.032081   78395 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:50:25.032146   78395 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:50:25.046796   78395 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:50:25.060409   78395 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:50:25.181121   78395 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:50:25.349831   78395 docker.go:233] disabling docker service ...
	I0717 01:50:25.349892   78395 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:50:25.365072   78395 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:50:25.378528   78395 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:50:25.516266   78395 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:50:25.662871   78395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:50:25.676926   78395 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:50:25.695104   78395 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:50:25.695159   78395 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:50:25.705723   78395 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:50:25.705790   78395 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:50:25.716012   78395 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:50:25.727003   78395 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:50:25.738384   78395 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:50:25.749888   78395 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:50:25.762205   78395 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:50:25.779626   78395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:50:25.792170   78395 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:50:25.803500   78395 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:50:25.803564   78395 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:50:25.817398   78395 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:50:25.827750   78395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:50:25.957905   78395 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:50:26.124866   78395 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:50:26.124930   78395 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:50:26.130854   78395 start.go:563] Will wait 60s for crictl version
	I0717 01:50:26.130907   78395 ssh_runner.go:195] Run: which crictl
	I0717 01:50:26.134976   78395 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:50:26.193685   78395 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:50:26.193840   78395 ssh_runner.go:195] Run: crio --version
	I0717 01:50:26.223428   78395 ssh_runner.go:195] Run: crio --version
	I0717 01:50:26.253697   78395 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
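The 78395 process above configures CRI-O (pause image, cgroupfs cgroup manager, unprivileged ports), restarts the service, and then probes the runtime with "sudo /usr/bin/crictl version". Below is a minimal local Go sketch of that probe, not minikube's ssh_runner code; it assumes crictl is installed at /usr/bin/crictl and that sudo is available on the machine it runs on.

// crictlversion.go - sketch of the crictl version probe seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The real harness runs this over SSH on the minikube VM; here we run it locally.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl version failed: %v\n%s\n", err, out)
		return
	}
	// Print only the runtime identification lines, as the log does.
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "RuntimeName:") || strings.HasPrefix(line, "RuntimeVersion:") {
			fmt.Println(strings.TrimSpace(line))
		}
	}
}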
	I0717 01:50:24.319595   77834 pod_ready.go:92] pod "calico-kube-controllers-564985c589-pqclt" in "kube-system" namespace has status "Ready":"True"
	I0717 01:50:24.319617   77834 pod_ready.go:81] duration metric: took 14.009137415s for pod "calico-kube-controllers-564985c589-pqclt" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:24.319627   77834 pod_ready.go:78] waiting up to 15m0s for pod "calico-node-2xm2z" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:24.327178   77834 pod_ready.go:92] pod "calico-node-2xm2z" in "kube-system" namespace has status "Ready":"True"
	I0717 01:50:24.327205   77834 pod_ready.go:81] duration metric: took 7.570745ms for pod "calico-node-2xm2z" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:24.327214   77834 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-xmzjp" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:26.340583   77834 pod_ready.go:102] pod "coredns-7db6d8ff4d-xmzjp" in "kube-system" namespace has status "Ready":"False"
	I0717 01:50:27.337476   77834 pod_ready.go:92] pod "coredns-7db6d8ff4d-xmzjp" in "kube-system" namespace has status "Ready":"True"
	I0717 01:50:27.337496   77834 pod_ready.go:81] duration metric: took 3.010275793s for pod "coredns-7db6d8ff4d-xmzjp" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:27.337506   77834 pod_ready.go:78] waiting up to 15m0s for pod "etcd-calico-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:27.351472   77834 pod_ready.go:92] pod "etcd-calico-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:50:27.351500   77834 pod_ready.go:81] duration metric: took 13.986811ms for pod "etcd-calico-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:27.351513   77834 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-calico-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:27.360408   77834 pod_ready.go:92] pod "kube-apiserver-calico-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:50:27.360433   77834 pod_ready.go:81] duration metric: took 8.912301ms for pod "kube-apiserver-calico-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:27.360446   77834 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-calico-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:27.366414   77834 pod_ready.go:92] pod "kube-controller-manager-calico-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:50:27.366437   77834 pod_ready.go:81] duration metric: took 5.9827ms for pod "kube-controller-manager-calico-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:27.366449   77834 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-rvf97" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:27.515504   77834 pod_ready.go:92] pod "kube-proxy-rvf97" in "kube-system" namespace has status "Ready":"True"
	I0717 01:50:27.515529   77834 pod_ready.go:81] duration metric: took 149.071914ms for pod "kube-proxy-rvf97" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:27.515541   77834 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:27.915853   77834 pod_ready.go:92] pod "kube-scheduler-calico-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:50:27.915877   77834 pod_ready.go:81] duration metric: took 400.327358ms for pod "kube-scheduler-calico-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:50:27.915891   77834 pod_ready.go:38] duration metric: took 17.615137973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:50:27.915906   77834 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:50:27.915960   77834 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:50:27.935943   77834 api_server.go:72] duration metric: took 25.243882779s to wait for apiserver process to appear ...
	I0717 01:50:27.935975   77834 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:50:27.935997   77834 api_server.go:253] Checking apiserver healthz at https://192.168.61.27:8443/healthz ...
	I0717 01:50:27.941419   77834 api_server.go:279] https://192.168.61.27:8443/healthz returned 200:
	ok
	I0717 01:50:27.942434   77834 api_server.go:141] control plane version: v1.30.2
	I0717 01:50:27.942456   77834 api_server.go:131] duration metric: took 6.474355ms to wait for apiserver health ...
	I0717 01:50:27.942463   77834 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:50:28.119700   77834 system_pods.go:59] 9 kube-system pods found
	I0717 01:50:28.119753   77834 system_pods.go:61] "calico-kube-controllers-564985c589-pqclt" [9cbd194f-12c6-420b-86aa-036ece9b54f4] Running
	I0717 01:50:28.119761   77834 system_pods.go:61] "calico-node-2xm2z" [2a06885f-4815-46a2-bbbf-c91bef7647c9] Running
	I0717 01:50:28.119767   77834 system_pods.go:61] "coredns-7db6d8ff4d-xmzjp" [fa58be52-1a3a-4a03-a017-bc1e8db03be9] Running
	I0717 01:50:28.119773   77834 system_pods.go:61] "etcd-calico-453036" [a33d395f-6d07-4248-bea8-1358943e9c2f] Running
	I0717 01:50:28.119779   77834 system_pods.go:61] "kube-apiserver-calico-453036" [9d113ed7-22d6-4504-abee-4b98d07ff959] Running
	I0717 01:50:28.119784   77834 system_pods.go:61] "kube-controller-manager-calico-453036" [08a3877e-e6d3-4b7d-87e8-ee2f44873428] Running
	I0717 01:50:28.119789   77834 system_pods.go:61] "kube-proxy-rvf97" [cbc25ee0-2b42-460d-9e8d-dfec83d50f30] Running
	I0717 01:50:28.119796   77834 system_pods.go:61] "kube-scheduler-calico-453036" [1a490ae1-b94d-491d-bce2-7e27ffe905eb] Running
	I0717 01:50:28.119803   77834 system_pods.go:61] "storage-provisioner" [322d3600-530d-4dde-a421-293ac90fa25a] Running
	I0717 01:50:28.119814   77834 system_pods.go:74] duration metric: took 177.346001ms to wait for pod list to return data ...
	I0717 01:50:28.119824   77834 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:50:28.327747   77834 default_sa.go:45] found service account: "default"
	I0717 01:50:28.327787   77834 default_sa.go:55] duration metric: took 207.949487ms for default service account to be created ...
	I0717 01:50:28.327799   77834 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:50:28.521161   77834 system_pods.go:86] 9 kube-system pods found
	I0717 01:50:28.521189   77834 system_pods.go:89] "calico-kube-controllers-564985c589-pqclt" [9cbd194f-12c6-420b-86aa-036ece9b54f4] Running
	I0717 01:50:28.521196   77834 system_pods.go:89] "calico-node-2xm2z" [2a06885f-4815-46a2-bbbf-c91bef7647c9] Running
	I0717 01:50:28.521200   77834 system_pods.go:89] "coredns-7db6d8ff4d-xmzjp" [fa58be52-1a3a-4a03-a017-bc1e8db03be9] Running
	I0717 01:50:28.521204   77834 system_pods.go:89] "etcd-calico-453036" [a33d395f-6d07-4248-bea8-1358943e9c2f] Running
	I0717 01:50:28.521208   77834 system_pods.go:89] "kube-apiserver-calico-453036" [9d113ed7-22d6-4504-abee-4b98d07ff959] Running
	I0717 01:50:28.521213   77834 system_pods.go:89] "kube-controller-manager-calico-453036" [08a3877e-e6d3-4b7d-87e8-ee2f44873428] Running
	I0717 01:50:28.521216   77834 system_pods.go:89] "kube-proxy-rvf97" [cbc25ee0-2b42-460d-9e8d-dfec83d50f30] Running
	I0717 01:50:28.521221   77834 system_pods.go:89] "kube-scheduler-calico-453036" [1a490ae1-b94d-491d-bce2-7e27ffe905eb] Running
	I0717 01:50:28.521225   77834 system_pods.go:89] "storage-provisioner" [322d3600-530d-4dde-a421-293ac90fa25a] Running
	I0717 01:50:28.521233   77834 system_pods.go:126] duration metric: took 193.426193ms to wait for k8s-apps to be running ...
	I0717 01:50:28.521241   77834 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:50:28.521292   77834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:50:28.536678   77834 system_svc.go:56] duration metric: took 15.427329ms WaitForService to wait for kubelet
	I0717 01:50:28.536717   77834 kubeadm.go:582] duration metric: took 25.84465818s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:50:28.536742   77834 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:50:28.715766   77834 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:50:28.715803   77834 node_conditions.go:123] node cpu capacity is 2
	I0717 01:50:28.715821   77834 node_conditions.go:105] duration metric: took 179.07082ms to run NodePressure ...
	I0717 01:50:28.715835   77834 start.go:241] waiting for startup goroutines ...
	I0717 01:50:28.715845   77834 start.go:246] waiting for cluster config update ...
	I0717 01:50:28.715858   77834 start.go:255] writing updated cluster config ...
	I0717 01:50:28.737374   77834 ssh_runner.go:195] Run: rm -f paused
	I0717 01:50:28.788756   77834 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:50:28.804230   77834 out.go:177] * Done! kubectl is now configured to use "calico-453036" cluster and "default" namespace by default
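Before declaring the calico-453036 cluster done, the 77834 process polls the apiserver healthz endpoint (the api_server.go lines above). The following self-contained Go sketch shows that kind of check; the URL is taken from the log, and the InsecureSkipVerify transport is an assumption made to keep the example short (the real client trusts the cluster CA from the kubeconfig).

// healthz.go - sketch of an apiserver /healthz probe like the one logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip TLS verification so the sketch needs no CA bundle.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.27:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok".
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}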
	I0717 01:50:26.255108   78395 main.go:141] libmachine: (custom-flannel-453036) Calling .GetIP
	I0717 01:50:26.258350   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:26.258711   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:42:42", ip: ""} in network mk-custom-flannel-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:50:11 +0000 UTC Type:0 Mac:52:54:00:d3:42:42 Iaid: IPaddr:192.168.72.187 Prefix:24 Hostname:custom-flannel-453036 Clientid:01:52:54:00:d3:42:42}
	I0717 01:50:26.258742   78395 main.go:141] libmachine: (custom-flannel-453036) DBG | domain custom-flannel-453036 has defined IP address 192.168.72.187 and MAC address 52:54:00:d3:42:42 in network mk-custom-flannel-453036
	I0717 01:50:26.258930   78395 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 01:50:26.264098   78395 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:50:26.279156   78395 kubeadm.go:883] updating cluster {Name:custom-flannel-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.30.2 ClusterName:custom-flannel-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.72.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:50:26.279285   78395 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:50:26.279342   78395 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:50:26.311973   78395 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:50:26.312057   78395 ssh_runner.go:195] Run: which lz4
	I0717 01:50:26.316102   78395 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:50:26.320311   78395 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:50:26.320345   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:50:27.764792   78395 crio.go:462] duration metric: took 1.448739323s to copy over tarball
	I0717 01:50:27.764890   78395 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:50:30.191216   78395 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.426297221s)
	I0717 01:50:30.191240   78395 crio.go:469] duration metric: took 2.426416255s to extract the tarball
	I0717 01:50:30.191247   78395 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:50:30.236610   78395 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:50:30.285386   78395 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:50:30.285413   78395 cache_images.go:84] Images are preloaded, skipping loading
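The preload step above stats /preloaded.tar.lz4 on the node, copies the cached tarball over when it is missing, and extracts it under /var with tar -I lz4 before re-checking the image list. A rough Go sketch of the check-and-extract part follows; the paths mirror the log, and actually running it would require root plus the lz4 binary on the node.

// preload.go - sketch of the preload tarball check-and-extract step above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Existence check; in the log this is a remote "stat" that fails on first start.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("no preload tarball on the node; it would be scp'd first:", err)
		return
	}
	// Extract into /var, preserving extended attributes, exactly as the logged command does.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
	}
}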
	I0717 01:50:30.285422   78395 kubeadm.go:934] updating node { 192.168.72.187 8443 v1.30.2 crio true true} ...
	I0717 01:50:30.285558   78395 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-453036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0717 01:50:30.285628   78395 ssh_runner.go:195] Run: crio config
	I0717 01:50:30.337303   78395 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0717 01:50:30.337337   78395 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:50:30.337358   78395 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.187 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-453036 NodeName:custom-flannel-453036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:50:30.337490   78395 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-453036"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:50:30.337542   78395 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:50:30.348439   78395 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:50:30.348496   78395 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:50:30.357952   78395 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0717 01:50:30.376793   78395 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:50:30.393451   78395 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0717 01:50:30.410631   78395 ssh_runner.go:195] Run: grep 192.168.72.187	control-plane.minikube.internal$ /etc/hosts
	I0717 01:50:30.414944   78395 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:50:30.427420   78395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:50:30.577426   78395 ssh_runner.go:195] Run: sudo systemctl start kubelet
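The kubeadm.yaml staged above is rendered from the node's parameters (name, IP, CRI socket, API server port) before being copied to /var/tmp/minikube. The sketch below renders a comparable InitConfiguration snippet with text/template; the template text and struct are illustrative only, not minikube's actual bootstrapper template.

// kubeadmcfg.go - sketch of rendering an InitConfiguration snippet from node parameters.
package main

import (
	"os"
	"text/template"
)

type node struct {
	Name      string
	IP        string
	BindPort  int
	CRISocket string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.IP}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.Name}}"
  kubeletExtraArgs:
    node-ip: {{.IP}}
`

func main() {
	// Values taken from the log above.
	n := node{
		Name:      "custom-flannel-453036",
		IP:        "192.168.72.187",
		BindPort:  8443,
		CRISocket: "unix:///var/run/crio/crio.sock",
	}
	t := template.Must(template.New("init").Parse(initCfg))
	if err := t.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}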
	I0717 01:50:30.596337   78395 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036 for IP: 192.168.72.187
	I0717 01:50:30.596366   78395 certs.go:194] generating shared ca certs ...
	I0717 01:50:30.596387   78395 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:50:30.596579   78395 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:50:30.596667   78395 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:50:30.596683   78395 certs.go:256] generating profile certs ...
	I0717 01:50:30.596800   78395 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.key
	I0717 01:50:30.596816   78395 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt with IP's: []
	I0717 01:50:30.944169   78395 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt ...
	I0717 01:50:30.944196   78395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: {Name:mk3d7da98bc0a9f8c4ef3bc4ea45af1ecf1de9ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:50:30.944366   78395 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.key ...
	I0717 01:50:30.944378   78395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.key: {Name:mk956638d09921f44aee738d8b8658ae9af7f43b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:50:30.944449   78395 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/apiserver.key.586f697c
	I0717 01:50:30.944464   78395 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/apiserver.crt.586f697c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.187]
	I0717 01:50:31.301478   78395 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/apiserver.crt.586f697c ...
	I0717 01:50:31.301503   78395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/apiserver.crt.586f697c: {Name:mk3f4b247ddb97b4ebf8bfc97cfe860cad2f371e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:50:31.301652   78395 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/apiserver.key.586f697c ...
	I0717 01:50:31.301663   78395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/apiserver.key.586f697c: {Name:mke3698b1085067bd815b7b4d3b48b6510c6fc0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:50:31.301738   78395 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/apiserver.crt.586f697c -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/apiserver.crt
	I0717 01:50:31.301837   78395 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/apiserver.key.586f697c -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/apiserver.key
	I0717 01:50:31.301892   78395 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/proxy-client.key
	I0717 01:50:31.301917   78395 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/proxy-client.crt with IP's: []
	I0717 01:50:31.618470   78395 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/proxy-client.crt ...
	I0717 01:50:31.618502   78395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/proxy-client.crt: {Name:mk95a3fec1d6abfab456ef7ac711570abe078973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:50:31.618662   78395 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/proxy-client.key ...
	I0717 01:50:31.618674   78395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/proxy-client.key: {Name:mk393d4a96741319d4114b59df955421c85afe39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
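The certs.go/crypto.go steps above generate per-profile client, apiserver, and aggregator certificates signed by the shared minikube CA. The sketch below produces a comparable client certificate and key with crypto/x509, but self-signs it to stay self-contained; it is not the CA-signing path minikube actually uses.

// selfsigncert.go - sketch of generating a client cert/key pair (self-signed for brevity).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// 2048-bit RSA key, as used for the profile certs in the log.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	// Self-signed: template doubles as parent. minikube instead signs with its CA cert/key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}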
	I0717 01:50:31.618835   78395 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:50:31.618873   78395 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:50:31.618883   78395 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:50:31.618904   78395 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:50:31.618926   78395 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:50:31.618948   78395 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:50:31.618981   78395 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:50:31.619492   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:50:31.648734   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:50:31.675380   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:50:31.700837   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:50:31.728934   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 01:50:31.754404   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:50:31.781571   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:50:31.808716   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:50:31.833574   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:50:31.858975   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:50:31.882660   78395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:50:31.908061   78395 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:50:31.930492   78395 ssh_runner.go:195] Run: openssl version
	I0717 01:50:31.936550   78395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:50:31.947841   78395 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:50:31.952450   78395 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:50:31.952501   78395 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:50:31.958656   78395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:50:31.970268   78395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:50:31.981988   78395 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:50:31.987036   78395 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:50:31.987091   78395 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:50:31.993232   78395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:50:32.004758   78395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:50:32.016295   78395 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:50:32.021357   78395 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:50:32.021409   78395 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:50:32.027223   78395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:50:32.038828   78395 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:50:32.043851   78395 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 01:50:32.043930   78395 kubeadm.go:392] StartCluster: {Name:custom-flannel-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:custom-flannel-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.72.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:50:32.044005   78395 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:50:32.044059   78395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:50:32.084279   78395 cri.go:89] found id: ""
	I0717 01:50:32.084346   78395 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:50:32.096144   78395 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:50:32.109203   78395 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:50:32.133240   78395 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:50:32.133264   78395 kubeadm.go:157] found existing configuration files:
	
	I0717 01:50:32.133317   78395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:50:32.153984   78395 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:50:32.154044   78395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:50:32.167116   78395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:50:32.178489   78395 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:50:32.178560   78395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:50:32.193516   78395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:50:32.202851   78395 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:50:32.202918   78395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:50:32.212700   78395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:50:32.222326   78395 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:50:32.222383   78395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:50:32.231994   78395 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:50:32.294159   78395 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 01:50:32.294580   78395 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:50:32.431971   78395 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:50:32.432130   78395 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:50:32.432295   78395 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:50:32.654449   78395 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:50:32.697386   78395 out.go:204]   - Generating certificates and keys ...
	I0717 01:50:32.697517   78395 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:50:32.697600   78395 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:50:32.741639   78395 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 01:50:32.951288   78395 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 01:50:33.066786   78395 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 01:50:33.156356   78395 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 01:50:33.362539   78395 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 01:50:33.362659   78395 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-453036 localhost] and IPs [192.168.72.187 127.0.0.1 ::1]
	I0717 01:50:33.792963   78395 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 01:50:33.793150   78395 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-453036 localhost] and IPs [192.168.72.187 127.0.0.1 ::1]
	I0717 01:50:33.961430   78395 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 01:50:34.112368   78395 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 01:50:34.268872   78395 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 01:50:34.269206   78395 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:50:34.460492   78395 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:50:34.651841   78395 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 01:50:34.818107   78395 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:50:34.934353   78395 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:50:35.133385   78395 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:50:35.134102   78395 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:50:35.136465   78395 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:50:35.139347   78395 out.go:204]   - Booting up control plane ...
	I0717 01:50:35.139471   78395 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:50:35.139581   78395 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:50:35.139675   78395 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:50:35.164474   78395 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:50:35.165469   78395 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:50:35.165533   78395 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:50:35.319165   78395 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 01:50:35.319253   78395 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 01:50:36.320888   78395 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002247895s
	I0717 01:50:36.321024   78395 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 01:50:41.320359   78395 kubeadm.go:310] [api-check] The API server is healthy after 5.00146432s
	I0717 01:50:41.335188   78395 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 01:50:41.359653   78395 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 01:50:41.387585   78395 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 01:50:41.387831   78395 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-453036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 01:50:41.399802   78395 kubeadm.go:310] [bootstrap-token] Using token: gjnmx3.bjdktvh9lxttjqvu
	I0717 01:50:41.401075   78395 out.go:204]   - Configuring RBAC rules ...
	I0717 01:50:41.401230   78395 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 01:50:41.405357   78395 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 01:50:41.414083   78395 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 01:50:41.417638   78395 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 01:50:41.423663   78395 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 01:50:41.427957   78395 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 01:50:41.732243   78395 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 01:50:42.166436   78395 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 01:50:42.730999   78395 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 01:50:42.731029   78395 kubeadm.go:310] 
	I0717 01:50:42.731146   78395 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 01:50:42.731168   78395 kubeadm.go:310] 
	I0717 01:50:42.731290   78395 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 01:50:42.731312   78395 kubeadm.go:310] 
	I0717 01:50:42.731361   78395 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 01:50:42.731443   78395 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 01:50:42.731518   78395 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 01:50:42.731526   78395 kubeadm.go:310] 
	I0717 01:50:42.731594   78395 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 01:50:42.731602   78395 kubeadm.go:310] 
	I0717 01:50:42.731654   78395 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 01:50:42.731663   78395 kubeadm.go:310] 
	I0717 01:50:42.731742   78395 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 01:50:42.731870   78395 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 01:50:42.731970   78395 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 01:50:42.731980   78395 kubeadm.go:310] 
	I0717 01:50:42.732099   78395 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 01:50:42.732197   78395 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 01:50:42.732206   78395 kubeadm.go:310] 
	I0717 01:50:42.732306   78395 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gjnmx3.bjdktvh9lxttjqvu \
	I0717 01:50:42.732435   78395 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 \
	I0717 01:50:42.732460   78395 kubeadm.go:310] 	--control-plane 
	I0717 01:50:42.732466   78395 kubeadm.go:310] 
	I0717 01:50:42.732579   78395 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 01:50:42.732589   78395 kubeadm.go:310] 
	I0717 01:50:42.732705   78395 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gjnmx3.bjdktvh9lxttjqvu \
	I0717 01:50:42.732840   78395 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 
	I0717 01:50:42.732977   78395 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:50:42.733180   78395 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0717 01:50:42.735717   78395 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0717 01:50:42.737061   78395 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 01:50:42.737108   78395 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0717 01:50:42.745588   78395 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0717 01:50:42.745627   78395 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0717 01:50:42.775880   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 01:50:43.222230   78395 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:50:43.222302   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:43.222830   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-453036 minikube.k8s.io/updated_at=2024_07_17T01_50_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=custom-flannel-453036 minikube.k8s.io/primary=true
	I0717 01:50:43.294123   78395 ops.go:34] apiserver oom_adj: -16
	I0717 01:50:43.375281   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:43.875602   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:44.375388   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:44.875660   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:45.375851   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:45.875430   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:46.376190   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:46.875331   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:47.376083   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:47.875381   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:48.375307   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:48.875910   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:49.375494   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:49.875610   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:50.375681   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:50:50.875305   78395 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
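The repeated "kubectl get sa default" runs above are a 500ms polling loop waiting for the default service account to exist before the minikube-rbac cluster role binding can be applied. A standalone Go sketch of such a wait loop follows; the kubeconfig path comes from the log, and shelling out to kubectl via os/exec (rather than using client-go) is a simplification.

// waitsa.go - sketch of the 500ms polling loop for the "default" service account.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds (exit 0) once the default service account has been created.
		err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}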
	
	
	==> CRI-O <==
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.160235171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181055160148867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e77223d1-f5ca-498c-80dc-a3f67d397a7b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.160893518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=114ad5bf-9d99-4741-981a-5b5c07fd8141 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.160966338Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=114ad5bf-9d99-4741-981a-5b5c07fd8141 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.161163044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7fef3a9397e5e20bb4f8c41fb29412d33aac928f53f2c389c039e8eebd15e24,PodSandboxId:ba758410f000d70c91659f1d2bbb68a0e3fe63e64842109b1f69bed7491f180c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038259652069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jbsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a95f33d-19ef-4b2e-a94e-08bbcaff92dc,},Annotations:map[string]string{io.kubernetes.container.hash: f840a0a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eed5cd4d1e24c7f37fdbb08bab5d2162ad480e8411233234c5c40417775e266,PodSandboxId:cb3af9dc3f7d686064e05ff60f65b46c1107e638e950de67fb4497b09d89be84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038200001329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mqjqg,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: ca27ce06-d171-4edd-9a1d-11898283f3ac,},Annotations:map[string]string{io.kubernetes.container.hash: f57320d7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8428dd4b31f403265f72aa016c445dee182a5309efa61fabd9e5f80506ea8979,PodSandboxId:b77504896dcb898c79f9b698b78a00617d8ee411aae6c3e439f2ab34dbca5aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721180038047568193,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3352a0de-41db-4537-b87a-24137084aa7a,},Annotations:map[string]string{io.kubernetes.container.hash: f0fc49d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda36ad068bc813ef826f15bb2666b1331230f655433861613fab689e98d0840,PodSandboxId:5382d0a57c5ce3f2ccee4bbc6a2b7a4e819f8153f4a76b6ffafcaa82d659abd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721180036827139635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55xmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6913d5-3362-4a9f-a159-1f9b1da7380a,},Annotations:map[string]string{io.kubernetes.container.hash: 19059592,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3be8a32004f486e3105ab65803f8e2017d04c43501d58ff97a3928b1ae10a3,PodSandboxId:216ab51e933ccf4ccc8a6b0293eb3a238cd3be19d8fad316f5ba92e04752c843,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172118001739921388
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c34385125b125de5400fa3226cf2de,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d32ff42339e93e69d019219c502384c38b3ff263b530b2d5b3dc7b6d7082a51,PodSandboxId:93bfd1f14b71596774e7cc218037091329950961f324aab8b0be69ee68389b5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180017395566478,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a514fc142df0ab9cd96e7808cfb29643,},Annotations:map[string]string{io.kubernetes.container.hash: 84b4e281,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967ef369f3c4138aefb5f4067e098be3c2958a5b19ca193593f4b7d88586a1a7,PodSandboxId:ef3005fd43bf3b843eb81891601a3e181ba6999fd67656e39963f8cf843482cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180017360782785,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 681b4df79913385a7df4408fb39c8722,},Annotations:map[string]string{io.kubernetes.container.hash: f56a7a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5d4443945dc37f18c20fd962b8d50e36f3aef34ed3cc135225afc3959134c4,PodSandboxId:e92d1b4917088b309fb1351143fabcbaa5e6fbd652ccd2da0987ba1ee75e754c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180017304125969,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1b23caea4395fd53bf3e32d9165fe52,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=114ad5bf-9d99-4741-981a-5b5c07fd8141 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.206934471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4dda0667-7606-4560-8b39-34df3b0dee40 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.207041389Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4dda0667-7606-4560-8b39-34df3b0dee40 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.208391766Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b026170-2651-4e21-944a-850f4c227ab6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.209659639Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181055209624606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b026170-2651-4e21-944a-850f4c227ab6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.210298641Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cb95892-c70b-43a2-90e3-8449c868ee88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.210355531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cb95892-c70b-43a2-90e3-8449c868ee88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.210613227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7fef3a9397e5e20bb4f8c41fb29412d33aac928f53f2c389c039e8eebd15e24,PodSandboxId:ba758410f000d70c91659f1d2bbb68a0e3fe63e64842109b1f69bed7491f180c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038259652069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jbsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a95f33d-19ef-4b2e-a94e-08bbcaff92dc,},Annotations:map[string]string{io.kubernetes.container.hash: f840a0a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eed5cd4d1e24c7f37fdbb08bab5d2162ad480e8411233234c5c40417775e266,PodSandboxId:cb3af9dc3f7d686064e05ff60f65b46c1107e638e950de67fb4497b09d89be84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038200001329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mqjqg,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: ca27ce06-d171-4edd-9a1d-11898283f3ac,},Annotations:map[string]string{io.kubernetes.container.hash: f57320d7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8428dd4b31f403265f72aa016c445dee182a5309efa61fabd9e5f80506ea8979,PodSandboxId:b77504896dcb898c79f9b698b78a00617d8ee411aae6c3e439f2ab34dbca5aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721180038047568193,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3352a0de-41db-4537-b87a-24137084aa7a,},Annotations:map[string]string{io.kubernetes.container.hash: f0fc49d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda36ad068bc813ef826f15bb2666b1331230f655433861613fab689e98d0840,PodSandboxId:5382d0a57c5ce3f2ccee4bbc6a2b7a4e819f8153f4a76b6ffafcaa82d659abd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721180036827139635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55xmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6913d5-3362-4a9f-a159-1f9b1da7380a,},Annotations:map[string]string{io.kubernetes.container.hash: 19059592,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3be8a32004f486e3105ab65803f8e2017d04c43501d58ff97a3928b1ae10a3,PodSandboxId:216ab51e933ccf4ccc8a6b0293eb3a238cd3be19d8fad316f5ba92e04752c843,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172118001739921388
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c34385125b125de5400fa3226cf2de,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d32ff42339e93e69d019219c502384c38b3ff263b530b2d5b3dc7b6d7082a51,PodSandboxId:93bfd1f14b71596774e7cc218037091329950961f324aab8b0be69ee68389b5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180017395566478,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a514fc142df0ab9cd96e7808cfb29643,},Annotations:map[string]string{io.kubernetes.container.hash: 84b4e281,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967ef369f3c4138aefb5f4067e098be3c2958a5b19ca193593f4b7d88586a1a7,PodSandboxId:ef3005fd43bf3b843eb81891601a3e181ba6999fd67656e39963f8cf843482cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180017360782785,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 681b4df79913385a7df4408fb39c8722,},Annotations:map[string]string{io.kubernetes.container.hash: f56a7a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5d4443945dc37f18c20fd962b8d50e36f3aef34ed3cc135225afc3959134c4,PodSandboxId:e92d1b4917088b309fb1351143fabcbaa5e6fbd652ccd2da0987ba1ee75e754c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180017304125969,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1b23caea4395fd53bf3e32d9165fe52,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cb95892-c70b-43a2-90e3-8449c868ee88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.256827506Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=574fa567-3af6-470b-b396-406ea5b334cb name=/runtime.v1.RuntimeService/Version
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.256968475Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=574fa567-3af6-470b-b396-406ea5b334cb name=/runtime.v1.RuntimeService/Version
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.264588638Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e27c406-5f72-4265-b33f-558c8dbbefbc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.265272595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181055265148579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e27c406-5f72-4265-b33f-558c8dbbefbc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.266565567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba35233e-561e-427b-913a-0a0a1820a95a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.266990293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba35233e-561e-427b-913a-0a0a1820a95a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.267354386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7fef3a9397e5e20bb4f8c41fb29412d33aac928f53f2c389c039e8eebd15e24,PodSandboxId:ba758410f000d70c91659f1d2bbb68a0e3fe63e64842109b1f69bed7491f180c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038259652069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jbsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a95f33d-19ef-4b2e-a94e-08bbcaff92dc,},Annotations:map[string]string{io.kubernetes.container.hash: f840a0a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eed5cd4d1e24c7f37fdbb08bab5d2162ad480e8411233234c5c40417775e266,PodSandboxId:cb3af9dc3f7d686064e05ff60f65b46c1107e638e950de67fb4497b09d89be84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038200001329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mqjqg,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: ca27ce06-d171-4edd-9a1d-11898283f3ac,},Annotations:map[string]string{io.kubernetes.container.hash: f57320d7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8428dd4b31f403265f72aa016c445dee182a5309efa61fabd9e5f80506ea8979,PodSandboxId:b77504896dcb898c79f9b698b78a00617d8ee411aae6c3e439f2ab34dbca5aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721180038047568193,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3352a0de-41db-4537-b87a-24137084aa7a,},Annotations:map[string]string{io.kubernetes.container.hash: f0fc49d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda36ad068bc813ef826f15bb2666b1331230f655433861613fab689e98d0840,PodSandboxId:5382d0a57c5ce3f2ccee4bbc6a2b7a4e819f8153f4a76b6ffafcaa82d659abd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721180036827139635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55xmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6913d5-3362-4a9f-a159-1f9b1da7380a,},Annotations:map[string]string{io.kubernetes.container.hash: 19059592,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3be8a32004f486e3105ab65803f8e2017d04c43501d58ff97a3928b1ae10a3,PodSandboxId:216ab51e933ccf4ccc8a6b0293eb3a238cd3be19d8fad316f5ba92e04752c843,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172118001739921388
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c34385125b125de5400fa3226cf2de,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d32ff42339e93e69d019219c502384c38b3ff263b530b2d5b3dc7b6d7082a51,PodSandboxId:93bfd1f14b71596774e7cc218037091329950961f324aab8b0be69ee68389b5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180017395566478,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a514fc142df0ab9cd96e7808cfb29643,},Annotations:map[string]string{io.kubernetes.container.hash: 84b4e281,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967ef369f3c4138aefb5f4067e098be3c2958a5b19ca193593f4b7d88586a1a7,PodSandboxId:ef3005fd43bf3b843eb81891601a3e181ba6999fd67656e39963f8cf843482cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180017360782785,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 681b4df79913385a7df4408fb39c8722,},Annotations:map[string]string{io.kubernetes.container.hash: f56a7a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5d4443945dc37f18c20fd962b8d50e36f3aef34ed3cc135225afc3959134c4,PodSandboxId:e92d1b4917088b309fb1351143fabcbaa5e6fbd652ccd2da0987ba1ee75e754c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180017304125969,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1b23caea4395fd53bf3e32d9165fe52,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba35233e-561e-427b-913a-0a0a1820a95a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.304978704Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d57e356-6d57-41b8-9358-b10340bff3d3 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.305105390Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d57e356-6d57-41b8-9358-b10340bff3d3 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.306931671Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c180718-3d25-4ae7-8a0c-591b3beec8b7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.307982504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181055307940697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c180718-3d25-4ae7-8a0c-591b3beec8b7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.308664826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bc8378d-982a-44a1-b685-2558d7b7acec name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.308741209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bc8378d-982a-44a1-b685-2558d7b7acec name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:50:55 default-k8s-diff-port-945694 crio[713]: time="2024-07-17 01:50:55.309080742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7fef3a9397e5e20bb4f8c41fb29412d33aac928f53f2c389c039e8eebd15e24,PodSandboxId:ba758410f000d70c91659f1d2bbb68a0e3fe63e64842109b1f69bed7491f180c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038259652069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jbsq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a95f33d-19ef-4b2e-a94e-08bbcaff92dc,},Annotations:map[string]string{io.kubernetes.container.hash: f840a0a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eed5cd4d1e24c7f37fdbb08bab5d2162ad480e8411233234c5c40417775e266,PodSandboxId:cb3af9dc3f7d686064e05ff60f65b46c1107e638e950de67fb4497b09d89be84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180038200001329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mqjqg,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: ca27ce06-d171-4edd-9a1d-11898283f3ac,},Annotations:map[string]string{io.kubernetes.container.hash: f57320d7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8428dd4b31f403265f72aa016c445dee182a5309efa61fabd9e5f80506ea8979,PodSandboxId:b77504896dcb898c79f9b698b78a00617d8ee411aae6c3e439f2ab34dbca5aad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1721180038047568193,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3352a0de-41db-4537-b87a-24137084aa7a,},Annotations:map[string]string{io.kubernetes.container.hash: f0fc49d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda36ad068bc813ef826f15bb2666b1331230f655433861613fab689e98d0840,PodSandboxId:5382d0a57c5ce3f2ccee4bbc6a2b7a4e819f8153f4a76b6ffafcaa82d659abd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721180036827139635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55xmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6913d5-3362-4a9f-a159-1f9b1da7380a,},Annotations:map[string]string{io.kubernetes.container.hash: 19059592,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3be8a32004f486e3105ab65803f8e2017d04c43501d58ff97a3928b1ae10a3,PodSandboxId:216ab51e933ccf4ccc8a6b0293eb3a238cd3be19d8fad316f5ba92e04752c843,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172118001739921388
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c34385125b125de5400fa3226cf2de,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d32ff42339e93e69d019219c502384c38b3ff263b530b2d5b3dc7b6d7082a51,PodSandboxId:93bfd1f14b71596774e7cc218037091329950961f324aab8b0be69ee68389b5a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180017395566478,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a514fc142df0ab9cd96e7808cfb29643,},Annotations:map[string]string{io.kubernetes.container.hash: 84b4e281,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967ef369f3c4138aefb5f4067e098be3c2958a5b19ca193593f4b7d88586a1a7,PodSandboxId:ef3005fd43bf3b843eb81891601a3e181ba6999fd67656e39963f8cf843482cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180017360782785,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 681b4df79913385a7df4408fb39c8722,},Annotations:map[string]string{io.kubernetes.container.hash: f56a7a02,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5d4443945dc37f18c20fd962b8d50e36f3aef34ed3cc135225afc3959134c4,PodSandboxId:e92d1b4917088b309fb1351143fabcbaa5e6fbd652ccd2da0987ba1ee75e754c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180017304125969,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-945694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1b23caea4395fd53bf3e32d9165fe52,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bc8378d-982a-44a1-b685-2558d7b7acec name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7fef3a9397e5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   ba758410f000d       coredns-7db6d8ff4d-jbsq5
	5eed5cd4d1e24       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   cb3af9dc3f7d6       coredns-7db6d8ff4d-mqjqg
	8428dd4b31f40       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   b77504896dcb8       storage-provisioner
	bda36ad068bc8       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   16 minutes ago      Running             kube-proxy                0                   5382d0a57c5ce       kube-proxy-55xmv
	bd3be8a32004f       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   17 minutes ago      Running             kube-scheduler            2                   216ab51e933cc       kube-scheduler-default-k8s-diff-port-945694
	3d32ff42339e9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   17 minutes ago      Running             etcd                      2                   93bfd1f14b715       etcd-default-k8s-diff-port-945694
	967ef369f3c41       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   17 minutes ago      Running             kube-apiserver            2                   ef3005fd43bf3       kube-apiserver-default-k8s-diff-port-945694
	fb5d4443945dc       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   17 minutes ago      Running             kube-controller-manager   2                   e92d1b4917088       kube-controller-manager-default-k8s-diff-port-945694
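	(The container status table above is the node-side listing of the same containers enumerated in the CRI-O ListContainers responses earlier in this log. A similar view can be reproduced directly on the node with crictl; this is a sketch that assumes CRI-O's standard socket path, which matches the cri-socket annotation in the node description further down.)
	
	  # List all containers known to CRI-O on the node, including exited ones
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a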
	
	
	==> coredns [5eed5cd4d1e24c7f37fdbb08bab5d2162ad480e8411233234c5c40417775e266] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f7fef3a9397e5e20bb4f8c41fb29412d33aac928f53f2c389c039e8eebd15e24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-945694
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-945694
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=default-k8s-diff-port-945694
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_33_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:33:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-945694
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:50:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:49:21 +0000   Wed, 17 Jul 2024 01:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:49:21 +0000   Wed, 17 Jul 2024 01:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:49:21 +0000   Wed, 17 Jul 2024 01:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:49:21 +0000   Wed, 17 Jul 2024 01:33:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.30
	  Hostname:    default-k8s-diff-port-945694
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4fc2ef93f4e4d689fe3de0aecd1906b
	  System UUID:                d4fc2ef9-3f4e-4d68-9fe3-de0aecd1906b
	  Boot ID:                    704973c4-4314-43a4-b18d-29cc02696ddd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jbsq5                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-mqjqg                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-945694                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-945694             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-945694    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-55xmv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-945694             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-569cc877fc-4nffv                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x2 over 17m)  kubelet          Node default-k8s-diff-port-945694 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x2 over 17m)  kubelet          Node default-k8s-diff-port-945694 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x2 over 17m)  kubelet          Node default-k8s-diff-port-945694 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node default-k8s-diff-port-945694 event: Registered Node default-k8s-diff-port-945694 in Controller
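	(The node description above, from labels through events, is standard `kubectl describe node` output captured by the log collector. It can be regenerated against the same cluster with the sketch below; the kubectl context name is an assumption, taken to match the minikube profile name.)
	
	  kubectl --context default-k8s-diff-port-945694 describe node default-k8s-diff-port-945694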
	
	
	==> dmesg <==
	[  +0.051861] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041147] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.524530] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.322871] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.579063] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.029292] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.062216] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072548] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.192016] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.136973] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.310457] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[  +4.725715] systemd-fstab-generator[797]: Ignoring "noauto" option for root device
	[  +0.063298] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.948258] systemd-fstab-generator[922]: Ignoring "noauto" option for root device
	[  +5.569174] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.196935] kauditd_printk_skb: 84 callbacks suppressed
	[Jul17 01:33] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.491839] systemd-fstab-generator[3607]: Ignoring "noauto" option for root device
	[  +4.969549] kauditd_printk_skb: 55 callbacks suppressed
	[  +1.583695] systemd-fstab-generator[3932]: Ignoring "noauto" option for root device
	[ +14.378969] systemd-fstab-generator[4156]: Ignoring "noauto" option for root device
	[  +0.015283] kauditd_printk_skb: 14 callbacks suppressed
	[Jul17 01:35] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [3d32ff42339e93e69d019219c502384c38b3ff263b530b2d5b3dc7b6d7082a51] <==
	{"level":"info","ts":"2024-07-17T01:33:38.319525Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"21545a69824e3d79","local-member-attributes":"{Name:default-k8s-diff-port-945694 ClientURLs:[https://192.168.50.30:2379]}","request-path":"/0/members/21545a69824e3d79/attributes","cluster-id":"4c46e38203538bcd","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:33:38.319669Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:33:38.320028Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4c46e38203538bcd","local-member-id":"21545a69824e3d79","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:33:38.320112Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:33:38.32015Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:33:38.320226Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:33:38.320255Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:33:38.320263Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:33:38.325794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.30:2379"}
	{"level":"info","ts":"2024-07-17T01:33:38.330253Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	2024/07/17 01:33:42 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-17T01:37:59.753586Z","caller":"traceutil/trace.go:171","msg":"trace[1848584702] transaction","detail":"{read_only:false; response_revision:651; number_of_response:1; }","duration":"133.751838ms","start":"2024-07-17T01:37:59.619787Z","end":"2024-07-17T01:37:59.753539Z","steps":["trace[1848584702] 'process raft request'  (duration: 133.437916ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:43:38.403544Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2024-07-17T01:43:38.414448Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":682,"took":"9.723703ms","hash":847581391,"current-db-size-bytes":2240512,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2240512,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-17T01:43:38.41478Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":847581391,"revision":682,"compact-revision":-1}
	{"level":"warn","ts":"2024-07-17T01:45:00.396587Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"309.807302ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:45:00.39681Z","caller":"traceutil/trace.go:171","msg":"trace[1009383848] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:994; }","duration":"310.114149ms","start":"2024-07-17T01:45:00.086646Z","end":"2024-07-17T01:45:00.39676Z","steps":["trace[1009383848] 'range keys from in-memory index tree'  (duration: 309.755566ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:45:00.396914Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:45:00.086633Z","time spent":"310.252025ms","remote":"127.0.0.1:39776","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":27,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"info","ts":"2024-07-17T01:45:46.933497Z","caller":"traceutil/trace.go:171","msg":"trace[185251023] transaction","detail":"{read_only:false; response_revision:1033; number_of_response:1; }","duration":"116.193499ms","start":"2024-07-17T01:45:46.817279Z","end":"2024-07-17T01:45:46.933472Z","steps":["trace[185251023] 'process raft request'  (duration: 116.042386ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:46:32.676501Z","caller":"traceutil/trace.go:171","msg":"trace[870734055] transaction","detail":"{read_only:false; response_revision:1069; number_of_response:1; }","duration":"189.590965ms","start":"2024-07-17T01:46:32.486864Z","end":"2024-07-17T01:46:32.676455Z","steps":["trace[870734055] 'process raft request'  (duration: 189.457629ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:48:38.411672Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":925}
	{"level":"info","ts":"2024-07-17T01:48:38.415983Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":925,"took":"3.906219ms","hash":916447035,"current-db-size-bytes":2240512,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-17T01:48:38.416081Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":916447035,"revision":925,"compact-revision":682}
	{"level":"info","ts":"2024-07-17T01:49:41.813246Z","caller":"traceutil/trace.go:171","msg":"trace[1205423787] transaction","detail":"{read_only:false; response_revision:1222; number_of_response:1; }","duration":"130.743291ms","start":"2024-07-17T01:49:41.682354Z","end":"2024-07-17T01:49:41.813097Z","steps":["trace[1205423787] 'process raft request'  (duration: 62.025414ms)","trace[1205423787] 'compare'  (duration: 68.338464ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T01:50:32.352964Z","caller":"traceutil/trace.go:171","msg":"trace[1224121334] transaction","detail":"{read_only:false; response_revision:1265; number_of_response:1; }","duration":"130.275478ms","start":"2024-07-17T01:50:32.222653Z","end":"2024-07-17T01:50:32.352928Z","steps":["trace[1224121334] 'process raft request'  (duration: 130.17991ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:50:55 up 22 min,  0 users,  load average: 0.25, 0.20, 0.13
	Linux default-k8s-diff-port-945694 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [967ef369f3c4138aefb5f4067e098be3c2958a5b19ca193593f4b7d88586a1a7] <==
	I0717 01:44:41.064136       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:46:41.063928       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:46:41.064279       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 01:46:41.064326       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:46:41.064415       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:46:41.064493       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 01:46:41.066232       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:48:40.067285       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:48:40.067582       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 01:48:41.068582       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:48:41.068671       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 01:48:41.068679       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:48:41.068603       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:48:41.068744       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 01:48:41.069842       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:49:41.069277       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:49:41.069499       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 01:49:41.069545       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:49:41.070305       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 01:49:41.070350       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 01:49:41.070564       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [fb5d4443945dc37f18c20fd962b8d50e36f3aef34ed3cc135225afc3959134c4] <==
	E0717 01:45:25.653591       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:45:26.166305       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:45:55.660729       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:45:56.179455       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:46:25.666739       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:46:26.190807       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:46:55.673007       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:46:56.199246       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:47:25.680897       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:47:26.208803       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:47:55.687242       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:47:56.217104       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:48:25.692734       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:48:26.226497       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:48:55.698376       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:48:56.234900       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:49:25.703519       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:49:26.243446       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:49:55.709628       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:49:56.254078       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 01:49:56.625449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="349.567µs"
	I0717 01:50:09.627289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="312.516µs"
	E0717 01:50:25.718039       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 01:50:26.270829       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:50:55.724458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	
	
	==> kube-proxy [bda36ad068bc813ef826f15bb2666b1331230f655433861613fab689e98d0840] <==
	I0717 01:33:57.036034       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:33:57.053473       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.30"]
	I0717 01:33:57.134694       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:33:57.134752       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:33:57.134769       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:33:57.137308       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:33:57.137483       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:33:57.137494       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:33:57.138820       1 config.go:192] "Starting service config controller"
	I0717 01:33:57.138847       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:33:57.138878       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:33:57.138882       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:33:57.139641       1 config.go:319] "Starting node config controller"
	I0717 01:33:57.139649       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:33:57.238954       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:33:57.239074       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:33:57.240680       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bd3be8a32004f486e3105ab65803f8e2017d04c43501d58ff97a3928b1ae10a3] <==
	W0717 01:33:40.081694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 01:33:40.081940       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 01:33:40.081781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 01:33:40.082007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 01:33:40.081835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 01:33:40.082066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 01:33:40.081845       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 01:33:40.082126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 01:33:40.902644       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 01:33:40.902673       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:33:41.019221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 01:33:41.019310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 01:33:41.059088       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 01:33:41.059211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 01:33:41.109485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 01:33:41.109684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 01:33:41.116801       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 01:33:41.116911       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 01:33:41.148008       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 01:33:41.148096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 01:33:41.182541       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 01:33:41.182597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 01:33:41.244663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 01:33:41.244747       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0717 01:33:43.874907       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:48:42 default-k8s-diff-port-945694 kubelet[3939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:48:42 default-k8s-diff-port-945694 kubelet[3939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:48:52 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:48:52.609128    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:49:05 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:49:05.607077    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:49:19 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:49:19.607417    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:49:31 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:49:31.607319    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:49:42 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:49:42.641237    3939 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:49:42 default-k8s-diff-port-945694 kubelet[3939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:49:42 default-k8s-diff-port-945694 kubelet[3939]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:49:42 default-k8s-diff-port-945694 kubelet[3939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:49:42 default-k8s-diff-port-945694 kubelet[3939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:49:44 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:49:44.625489    3939 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 01:49:44 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:49:44.625588    3939 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 01:49:44 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:49:44.625872    3939 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-22nrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-4nffv_kube-system(ba214ec1-a180-42ec-847e-80464e102765): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 01:49:44 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:49:44.625970    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:49:56 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:49:56.606953    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:50:09 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:50:09.607577    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:50:20 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:50:20.607062    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:50:35 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:50:35.607380    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	Jul 17 01:50:42 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:50:42.650765    3939 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:50:42 default-k8s-diff-port-945694 kubelet[3939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:50:42 default-k8s-diff-port-945694 kubelet[3939]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:50:42 default-k8s-diff-port-945694 kubelet[3939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:50:42 default-k8s-diff-port-945694 kubelet[3939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:50:50 default-k8s-diff-port-945694 kubelet[3939]: E0717 01:50:50.606541    3939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4nffv" podUID="ba214ec1-a180-42ec-847e-80464e102765"
	
	
	==> storage-provisioner [8428dd4b31f403265f72aa016c445dee182a5309efa61fabd9e5f80506ea8979] <==
	I0717 01:33:58.290237       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:33:58.306338       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:33:58.306374       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:33:58.323096       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:33:58.323942       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-945694_5ebc6471-a584-4320-90d4-35b93d89aaed!
	I0717 01:33:58.349702       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0e52588b-4b2b-4822-901e-6e471a9db2a8", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-945694_5ebc6471-a584-4320-90d4-35b93d89aaed became leader
	I0717 01:33:58.428326       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-945694_5ebc6471-a584-4320-90d4-35b93d89aaed!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-945694 -n default-k8s-diff-port-945694
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-945694 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-4nffv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-945694 describe pod metrics-server-569cc877fc-4nffv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-945694 describe pod metrics-server-569cc877fc-4nffv: exit status 1 (88.786568ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-4nffv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-945694 describe pod metrics-server-569cc877fc-4nffv: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (473.62s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (425.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-818382 -n no-preload-818382
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-17 01:58:42.450131721 +0000 UTC m=+6853.254277542
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-818382 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-818382 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.799µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-818382 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-818382 -n no-preload-818382
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-818382 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-818382 logs -n 25: (1.165049951s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| delete  | -p flannel-453036                                    | flannel-453036 | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	| ssh     | -p bridge-453036 sudo                                | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo                                | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo                                | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo cat                            | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo cat                            | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo                                | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo                                | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo cat                            | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo docker                         | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo                                | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo                                | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo cat                            | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo cat                            | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo                                | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo                                | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo                                | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo cat                            | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo cat                            | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo                                | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo                                | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo                                | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo find                           | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p bridge-453036 sudo crio                           | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p bridge-453036                                     | bridge-453036  | jenkins | v1.33.1 | 17 Jul 24 01:53 UTC | 17 Jul 24 01:53 UTC |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:51:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:51:48.450962   82514 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:51:48.451178   82514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:48.451210   82514 out.go:304] Setting ErrFile to fd 2...
	I0717 01:51:48.451231   82514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:48.451486   82514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:51:48.452146   82514 out.go:298] Setting JSON to false
	I0717 01:51:48.453307   82514 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9257,"bootTime":1721171851,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:51:48.453404   82514 start.go:139] virtualization: kvm guest
	I0717 01:51:48.455793   82514 out.go:177] * [bridge-453036] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:51:48.457352   82514 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:51:48.457386   82514 notify.go:220] Checking for updates...
	I0717 01:51:48.458906   82514 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:51:48.460343   82514 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:51:48.461736   82514 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:51:48.462988   82514 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:51:48.464263   82514 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:51:48.465968   82514 config.go:182] Loaded profile config "enable-default-cni-453036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:51:48.466148   82514 config.go:182] Loaded profile config "flannel-453036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:51:48.466265   82514 config.go:182] Loaded profile config "no-preload-818382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:51:48.466359   82514 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:51:48.511158   82514 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 01:51:48.512596   82514 start.go:297] selected driver: kvm2
	I0717 01:51:48.512622   82514 start.go:901] validating driver "kvm2" against <nil>
	I0717 01:51:48.512637   82514 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:51:48.513642   82514 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:48.513736   82514 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:51:48.532891   82514 install.go:137] /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:51:48.532981   82514 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 01:51:48.533268   82514 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:51:48.533304   82514 cni.go:84] Creating CNI manager for "bridge"
	I0717 01:51:48.533312   82514 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 01:51:48.533378   82514 start.go:340] cluster config:
	{Name:bridge-453036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:48.533513   82514 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:48.535809   82514 out.go:177] * Starting "bridge-453036" primary control-plane node in "bridge-453036" cluster
	I0717 01:51:48.537184   82514 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:51:48.537265   82514 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 01:51:48.537293   82514 cache.go:56] Caching tarball of preloaded images
	I0717 01:51:48.537436   82514 preload.go:172] Found /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:51:48.537454   82514 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 01:51:48.537641   82514 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/config.json ...
	I0717 01:51:48.537679   82514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/config.json: {Name:mk767a23667e81d93c8a3733a0028a82e368b8a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:48.537902   82514 start.go:360] acquireMachinesLock for bridge-453036: {Name:mk359f0954ab505b28ed2ad304bec72f6bc026ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:51:48.537968   82514 start.go:364] duration metric: took 44.185µs to acquireMachinesLock for "bridge-453036"
	I0717 01:51:48.537997   82514 start.go:93] Provisioning new machine with config: &{Name:bridge-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:51:48.538100   82514 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 01:51:47.322989   80566 main.go:141] libmachine: (flannel-453036) Calling .GetIP
	I0717 01:51:47.326189   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:47.326597   80566 main.go:141] libmachine: (flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:b7:c4", ip: ""} in network mk-flannel-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:51:38 +0000 UTC Type:0 Mac:52:54:00:24:b7:c4 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:flannel-453036 Clientid:01:52:54:00:24:b7:c4}
	I0717 01:51:47.326630   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined IP address 192.168.61.173 and MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:51:47.326844   80566 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 01:51:47.332417   80566 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:51:47.350497   80566 kubeadm.go:883] updating cluster {Name:flannel-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.173 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:51:47.350621   80566 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:51:47.350675   80566 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:51:47.389526   80566 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:51:47.389588   80566 ssh_runner.go:195] Run: which lz4
	I0717 01:51:47.393635   80566 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:51:47.398105   80566 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:51:47.398139   80566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:51:48.880772   80566 crio.go:462] duration metric: took 1.487157567s to copy over tarball
	I0717 01:51:48.880860   80566 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:51:48.301231   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:48.801264   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:49.301448   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:49.801401   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:50.301070   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:50.801440   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:51.300701   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:51.801560   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:52.301175   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:52.800658   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:48.539862   82514 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 01:51:48.540027   82514 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:51:48.540076   82514 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:48.559681   82514 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45833
	I0717 01:51:48.560066   82514 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:48.560763   82514 main.go:141] libmachine: Using API Version  1
	I0717 01:51:48.560789   82514 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:48.561191   82514 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:48.561404   82514 main.go:141] libmachine: (bridge-453036) Calling .GetMachineName
	I0717 01:51:48.561591   82514 main.go:141] libmachine: (bridge-453036) Calling .DriverName
	I0717 01:51:48.561777   82514 start.go:159] libmachine.API.Create for "bridge-453036" (driver="kvm2")
	I0717 01:51:48.561808   82514 client.go:168] LocalClient.Create starting
	I0717 01:51:48.561843   82514 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem
	I0717 01:51:48.561888   82514 main.go:141] libmachine: Decoding PEM data...
	I0717 01:51:48.561912   82514 main.go:141] libmachine: Parsing certificate...
	I0717 01:51:48.561977   82514 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem
	I0717 01:51:48.562001   82514 main.go:141] libmachine: Decoding PEM data...
	I0717 01:51:48.562022   82514 main.go:141] libmachine: Parsing certificate...
	I0717 01:51:48.562048   82514 main.go:141] libmachine: Running pre-create checks...
	I0717 01:51:48.562066   82514 main.go:141] libmachine: (bridge-453036) Calling .PreCreateCheck
	I0717 01:51:48.562409   82514 main.go:141] libmachine: (bridge-453036) Calling .GetConfigRaw
	I0717 01:51:48.562879   82514 main.go:141] libmachine: Creating machine...
	I0717 01:51:48.562895   82514 main.go:141] libmachine: (bridge-453036) Calling .Create
	I0717 01:51:48.563080   82514 main.go:141] libmachine: (bridge-453036) Creating KVM machine...
	I0717 01:51:48.564592   82514 main.go:141] libmachine: (bridge-453036) DBG | found existing default KVM network
	I0717 01:51:48.565988   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:48.565828   82537 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:26:93} reservation:<nil>}
	I0717 01:51:48.567029   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:48.566952   82537 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:36:8f:94} reservation:<nil>}
	I0717 01:51:48.568013   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:48.567915   82537 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:99:0c:96} reservation:<nil>}
	I0717 01:51:48.569336   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:48.569244   82537 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289ac0}
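The network.go lines above walk candidate /24 subnets and skip any whose gateway is already claimed before settling on 192.168.72.0/24. A rough Go sketch of that skip-taken-subnets idea, assuming the only signal is a locally bound gateway address (the real minikube implementation also consults reservations and libvirt state):

    package main

    import (
    	"fmt"
    	"net"
    )

    // gatewayInUse reports whether any local interface already owns ip.
    func gatewayInUse(ip net.IP) bool {
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return false
    	}
    	for _, a := range addrs {
    		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.Equal(ip) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// candidate gateways mirror the subnets seen in the log
    	candidates := []string{"192.168.39.1", "192.168.50.1", "192.168.61.1", "192.168.72.1"}
    	for _, gw := range candidates {
    		if gatewayInUse(net.ParseIP(gw)) {
    			fmt.Println("skipping taken subnet", gw)
    			continue
    		}
    		fmt.Println("using free private subnet", gw, "/24")
    		return
    	}
    	fmt.Println("no free subnet found")
    }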
	I0717 01:51:48.569376   82514 main.go:141] libmachine: (bridge-453036) DBG | created network xml: 
	I0717 01:51:48.569393   82514 main.go:141] libmachine: (bridge-453036) DBG | <network>
	I0717 01:51:48.569406   82514 main.go:141] libmachine: (bridge-453036) DBG |   <name>mk-bridge-453036</name>
	I0717 01:51:48.569416   82514 main.go:141] libmachine: (bridge-453036) DBG |   <dns enable='no'/>
	I0717 01:51:48.569773   82514 main.go:141] libmachine: (bridge-453036) DBG |   
	I0717 01:51:48.569808   82514 main.go:141] libmachine: (bridge-453036) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0717 01:51:48.569831   82514 main.go:141] libmachine: (bridge-453036) DBG |     <dhcp>
	I0717 01:51:48.569849   82514 main.go:141] libmachine: (bridge-453036) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0717 01:51:48.569862   82514 main.go:141] libmachine: (bridge-453036) DBG |     </dhcp>
	I0717 01:51:48.569870   82514 main.go:141] libmachine: (bridge-453036) DBG |   </ip>
	I0717 01:51:48.569876   82514 main.go:141] libmachine: (bridge-453036) DBG |   
	I0717 01:51:48.569884   82514 main.go:141] libmachine: (bridge-453036) DBG | </network>
	I0717 01:51:48.569893   82514 main.go:141] libmachine: (bridge-453036) DBG | 
	I0717 01:51:48.575103   82514 main.go:141] libmachine: (bridge-453036) DBG | trying to create private KVM network mk-bridge-453036 192.168.72.0/24...
	I0717 01:51:48.666721   82514 main.go:141] libmachine: (bridge-453036) DBG | private KVM network mk-bridge-453036 192.168.72.0/24 created
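The <network> XML above is rendered from the chosen subnet before libvirt defines the private network. A small Go sketch of just that rendering step with text/template; defining and starting the network through libvirt is deliberately left out, and the parameter values are copied from the log:

    package main

    import (
    	"os"
    	"text/template"
    )

    // networkTmpl matches the shape of the mk-bridge-453036 definition in the log.
    const networkTmpl = `<network>
      <name>{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='255.255.255.0'>
        <dhcp>
          <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
        </dhcp>
      </ip>
    </network>
    `

    type netParams struct {
    	Name, Gateway, DHCPStart, DHCPEnd string
    }

    func main() {
    	p := netParams{
    		Name:      "mk-bridge-453036",
    		Gateway:   "192.168.72.1",
    		DHCPStart: "192.168.72.2",
    		DHCPEnd:   "192.168.72.253",
    	}
    	template.Must(template.New("net").Parse(networkTmpl)).Execute(os.Stdout, p)
    }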
	I0717 01:51:48.666915   82514 main.go:141] libmachine: (bridge-453036) Setting up store path in /home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036 ...
	I0717 01:51:48.667022   82514 main.go:141] libmachine: (bridge-453036) Building disk image from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 01:51:48.667172   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:48.667083   82537 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:51:48.667278   82514 main.go:141] libmachine: (bridge-453036) Downloading /home/jenkins/minikube-integration/19265-12897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso...
	I0717 01:51:48.953231   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:48.953085   82537 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/id_rsa...
	I0717 01:51:49.031372   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:49.031225   82537 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/bridge-453036.rawdisk...
	I0717 01:51:49.031418   82514 main.go:141] libmachine: (bridge-453036) DBG | Writing magic tar header
	I0717 01:51:49.031433   82514 main.go:141] libmachine: (bridge-453036) DBG | Writing SSH key tar header
	I0717 01:51:49.031446   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:49.031334   82537 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036 ...
	I0717 01:51:49.031462   82514 main.go:141] libmachine: (bridge-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036 (perms=drwx------)
	I0717 01:51:49.031482   82514 main.go:141] libmachine: (bridge-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube/machines (perms=drwxr-xr-x)
	I0717 01:51:49.031495   82514 main.go:141] libmachine: (bridge-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897/.minikube (perms=drwxr-xr-x)
	I0717 01:51:49.031510   82514 main.go:141] libmachine: (bridge-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036
	I0717 01:51:49.031534   82514 main.go:141] libmachine: (bridge-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube/machines
	I0717 01:51:49.031549   82514 main.go:141] libmachine: (bridge-453036) Setting executable bit set on /home/jenkins/minikube-integration/19265-12897 (perms=drwxrwxr-x)
	I0717 01:51:49.031561   82514 main.go:141] libmachine: (bridge-453036) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 01:51:49.031573   82514 main.go:141] libmachine: (bridge-453036) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 01:51:49.031586   82514 main.go:141] libmachine: (bridge-453036) Creating domain...
	I0717 01:51:49.031610   82514 main.go:141] libmachine: (bridge-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:51:49.031635   82514 main.go:141] libmachine: (bridge-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19265-12897
	I0717 01:51:49.031644   82514 main.go:141] libmachine: (bridge-453036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 01:51:49.031657   82514 main.go:141] libmachine: (bridge-453036) DBG | Checking permissions on dir: /home/jenkins
	I0717 01:51:49.031668   82514 main.go:141] libmachine: (bridge-453036) DBG | Checking permissions on dir: /home
	I0717 01:51:49.031702   82514 main.go:141] libmachine: (bridge-453036) DBG | Skipping /home - not owner
	I0717 01:51:49.032749   82514 main.go:141] libmachine: (bridge-453036) define libvirt domain using xml: 
	I0717 01:51:49.032768   82514 main.go:141] libmachine: (bridge-453036) <domain type='kvm'>
	I0717 01:51:49.032775   82514 main.go:141] libmachine: (bridge-453036)   <name>bridge-453036</name>
	I0717 01:51:49.032780   82514 main.go:141] libmachine: (bridge-453036)   <memory unit='MiB'>3072</memory>
	I0717 01:51:49.032785   82514 main.go:141] libmachine: (bridge-453036)   <vcpu>2</vcpu>
	I0717 01:51:49.032805   82514 main.go:141] libmachine: (bridge-453036)   <features>
	I0717 01:51:49.032818   82514 main.go:141] libmachine: (bridge-453036)     <acpi/>
	I0717 01:51:49.032829   82514 main.go:141] libmachine: (bridge-453036)     <apic/>
	I0717 01:51:49.032840   82514 main.go:141] libmachine: (bridge-453036)     <pae/>
	I0717 01:51:49.032850   82514 main.go:141] libmachine: (bridge-453036)     
	I0717 01:51:49.032858   82514 main.go:141] libmachine: (bridge-453036)   </features>
	I0717 01:51:49.032869   82514 main.go:141] libmachine: (bridge-453036)   <cpu mode='host-passthrough'>
	I0717 01:51:49.032878   82514 main.go:141] libmachine: (bridge-453036)   
	I0717 01:51:49.032886   82514 main.go:141] libmachine: (bridge-453036)   </cpu>
	I0717 01:51:49.032898   82514 main.go:141] libmachine: (bridge-453036)   <os>
	I0717 01:51:49.032912   82514 main.go:141] libmachine: (bridge-453036)     <type>hvm</type>
	I0717 01:51:49.032924   82514 main.go:141] libmachine: (bridge-453036)     <boot dev='cdrom'/>
	I0717 01:51:49.032945   82514 main.go:141] libmachine: (bridge-453036)     <boot dev='hd'/>
	I0717 01:51:49.032954   82514 main.go:141] libmachine: (bridge-453036)     <bootmenu enable='no'/>
	I0717 01:51:49.032960   82514 main.go:141] libmachine: (bridge-453036)   </os>
	I0717 01:51:49.032968   82514 main.go:141] libmachine: (bridge-453036)   <devices>
	I0717 01:51:49.033042   82514 main.go:141] libmachine: (bridge-453036)     <disk type='file' device='cdrom'>
	I0717 01:51:49.033084   82514 main.go:141] libmachine: (bridge-453036)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/boot2docker.iso'/>
	I0717 01:51:49.033107   82514 main.go:141] libmachine: (bridge-453036)       <target dev='hdc' bus='scsi'/>
	I0717 01:51:49.033117   82514 main.go:141] libmachine: (bridge-453036)       <readonly/>
	I0717 01:51:49.033126   82514 main.go:141] libmachine: (bridge-453036)     </disk>
	I0717 01:51:49.033137   82514 main.go:141] libmachine: (bridge-453036)     <disk type='file' device='disk'>
	I0717 01:51:49.033157   82514 main.go:141] libmachine: (bridge-453036)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 01:51:49.033172   82514 main.go:141] libmachine: (bridge-453036)       <source file='/home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/bridge-453036.rawdisk'/>
	I0717 01:51:49.033183   82514 main.go:141] libmachine: (bridge-453036)       <target dev='hda' bus='virtio'/>
	I0717 01:51:49.033191   82514 main.go:141] libmachine: (bridge-453036)     </disk>
	I0717 01:51:49.033200   82514 main.go:141] libmachine: (bridge-453036)     <interface type='network'>
	I0717 01:51:49.033209   82514 main.go:141] libmachine: (bridge-453036)       <source network='mk-bridge-453036'/>
	I0717 01:51:49.033241   82514 main.go:141] libmachine: (bridge-453036)       <model type='virtio'/>
	I0717 01:51:49.033272   82514 main.go:141] libmachine: (bridge-453036)     </interface>
	I0717 01:51:49.033294   82514 main.go:141] libmachine: (bridge-453036)     <interface type='network'>
	I0717 01:51:49.033313   82514 main.go:141] libmachine: (bridge-453036)       <source network='default'/>
	I0717 01:51:49.033341   82514 main.go:141] libmachine: (bridge-453036)       <model type='virtio'/>
	I0717 01:51:49.033350   82514 main.go:141] libmachine: (bridge-453036)     </interface>
	I0717 01:51:49.033355   82514 main.go:141] libmachine: (bridge-453036)     <serial type='pty'>
	I0717 01:51:49.033363   82514 main.go:141] libmachine: (bridge-453036)       <target port='0'/>
	I0717 01:51:49.033368   82514 main.go:141] libmachine: (bridge-453036)     </serial>
	I0717 01:51:49.033374   82514 main.go:141] libmachine: (bridge-453036)     <console type='pty'>
	I0717 01:51:49.033379   82514 main.go:141] libmachine: (bridge-453036)       <target type='serial' port='0'/>
	I0717 01:51:49.033386   82514 main.go:141] libmachine: (bridge-453036)     </console>
	I0717 01:51:49.033391   82514 main.go:141] libmachine: (bridge-453036)     <rng model='virtio'>
	I0717 01:51:49.033399   82514 main.go:141] libmachine: (bridge-453036)       <backend model='random'>/dev/random</backend>
	I0717 01:51:49.033403   82514 main.go:141] libmachine: (bridge-453036)     </rng>
	I0717 01:51:49.033423   82514 main.go:141] libmachine: (bridge-453036)     
	I0717 01:51:49.033439   82514 main.go:141] libmachine: (bridge-453036)     
	I0717 01:51:49.033449   82514 main.go:141] libmachine: (bridge-453036)   </devices>
	I0717 01:51:49.033459   82514 main.go:141] libmachine: (bridge-453036) </domain>
	I0717 01:51:49.033470   82514 main.go:141] libmachine: (bridge-453036) 
	I0717 01:51:49.038168   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:d8:84:13 in network default
	I0717 01:51:49.038770   82514 main.go:141] libmachine: (bridge-453036) Ensuring networks are active...
	I0717 01:51:49.038794   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:51:49.039628   82514 main.go:141] libmachine: (bridge-453036) Ensuring network default is active
	I0717 01:51:49.040000   82514 main.go:141] libmachine: (bridge-453036) Ensuring network mk-bridge-453036 is active
	I0717 01:51:49.040649   82514 main.go:141] libmachine: (bridge-453036) Getting domain xml...
	I0717 01:51:49.041434   82514 main.go:141] libmachine: (bridge-453036) Creating domain...
	I0717 01:51:50.429923   82514 main.go:141] libmachine: (bridge-453036) Waiting to get IP...
	I0717 01:51:50.430727   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:51:50.431250   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:51:50.431288   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:50.431246   82537 retry.go:31] will retry after 220.985001ms: waiting for machine to come up
	I0717 01:51:50.655323   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:51:50.655978   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:51:50.656004   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:50.655935   82537 retry.go:31] will retry after 260.853881ms: waiting for machine to come up
	I0717 01:51:50.918598   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:51:50.919253   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:51:50.919307   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:50.919210   82537 retry.go:31] will retry after 386.425282ms: waiting for machine to come up
	I0717 01:51:51.306669   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:51:51.307344   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:51:51.307381   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:51.307298   82537 retry.go:31] will retry after 434.617403ms: waiting for machine to come up
	I0717 01:51:51.743979   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:51:51.744529   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:51:51.744587   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:51.744503   82537 retry.go:31] will retry after 702.537197ms: waiting for machine to come up
	I0717 01:51:52.448353   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:51:52.448801   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:51:52.448872   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:52.448774   82537 retry.go:31] will retry after 809.269268ms: waiting for machine to come up
	I0717 01:51:53.259064   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:51:53.259636   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:51:53.259669   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:53.259555   82537 retry.go:31] will retry after 1.066302154s: waiting for machine to come up
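The "will retry after …: waiting for machine to come up" lines are a poll loop with growing delays while the new domain waits for a DHCP lease. A generic Go sketch of that shape; the delays and growth factor here are illustrative, not minikube's actual backoff schedule from its retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor polls cond with a growing, slightly randomized delay between attempts.
    func waitFor(cond func() (bool, error), attempts int, base time.Duration) error {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		ok, err := cond()
    		if err != nil {
    			return err
    		}
    		if ok {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 4))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
    		time.Sleep(delay + jitter)
    		delay = delay * 3 / 2 // grow roughly like the intervals in the log
    	}
    	return errors.New("machine never reported an IP address")
    }

    func main() {
    	tries := 0
    	_ = waitFor(func() (bool, error) {
    		tries++
    		return tries >= 5, nil // stand-in for "DHCP lease with an IP exists"
    	}, 10, 200*time.Millisecond)
    }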
	I0717 01:51:51.408780   80566 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.527887141s)
	I0717 01:51:51.408808   80566 crio.go:469] duration metric: took 2.528008284s to extract the tarball
	I0717 01:51:51.408815   80566 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:51:51.454506   80566 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:51:51.508448   80566 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:51:51.508471   80566 cache_images.go:84] Images are preloaded, skipping loading
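The preload path logged for process 80566 is: ask crictl for its image list, and if the expected kube image is missing, copy the preload tarball to /preloaded.tar.lz4 and extract it into /var, after which a second crictl query reports all images preloaded. A hedged Go sketch of the decision step; the JSON field names are my assumption about crictl's `images --output json` format, not something confirmed by this log:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage runs crictl and checks whether want appears among the repo tags.
    func hasImage(want string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		return false, err
    	}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.2")
    	if err != nil || !ok {
    		fmt.Println("couldn't find preloaded image, assuming images are not preloaded")
    		// the real flow then copies the preload tarball over SSH and runs:
    		//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    		return
    	}
    	fmt.Println("all images are preloaded for cri-o runtime")
    }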
	I0717 01:51:51.508480   80566 kubeadm.go:934] updating node { 192.168.61.173 8443 v1.30.2 crio true true} ...
	I0717 01:51:51.508633   80566 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-453036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:flannel-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0717 01:51:51.508720   80566 ssh_runner.go:195] Run: crio config
	I0717 01:51:51.570197   80566 cni.go:84] Creating CNI manager for "flannel"
	I0717 01:51:51.570221   80566 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:51:51.570247   80566 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.173 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-453036 NodeName:flannel-453036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:51:51.570450   80566 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-453036"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:51:51.570525   80566 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:51:51.582578   80566 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:51:51.582662   80566 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:51:51.596139   80566 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0717 01:51:51.617238   80566 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:51:51.636740   80566 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0717 01:51:51.656105   80566 ssh_runner.go:195] Run: grep 192.168.61.173	control-plane.minikube.internal$ /etc/hosts
	I0717 01:51:51.660240   80566 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.173	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:51:51.675621   80566 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:51:51.822344   80566 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:51:51.849220   80566 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036 for IP: 192.168.61.173
	I0717 01:51:51.849244   80566 certs.go:194] generating shared ca certs ...
	I0717 01:51:51.849265   80566 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:51.849457   80566 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:51:51.849513   80566 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:51:51.849527   80566 certs.go:256] generating profile certs ...
	I0717 01:51:51.849617   80566 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.key
	I0717 01:51:51.849635   80566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt with IP's: []
	I0717 01:51:51.919225   80566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt ...
	I0717 01:51:51.919253   80566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: {Name:mkddaa54f17d32a728672aeea50c027842749286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:51.919456   80566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.key ...
	I0717 01:51:51.919471   80566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.key: {Name:mk5a18668106196362b47dd6bb9590c25ffbe4c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:51.919574   80566 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/apiserver.key.8746f2ea
	I0717 01:51:51.919590   80566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/apiserver.crt.8746f2ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.173]
	I0717 01:51:52.141870   80566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/apiserver.crt.8746f2ea ...
	I0717 01:51:52.141900   80566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/apiserver.crt.8746f2ea: {Name:mk4cc73b8587bcf135e5e054c47106d2f06b5a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:52.142094   80566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/apiserver.key.8746f2ea ...
	I0717 01:51:52.142110   80566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/apiserver.key.8746f2ea: {Name:mk6126666bdb4851fb0fade6301f20a8b0028a52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:52.142208   80566 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/apiserver.crt.8746f2ea -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/apiserver.crt
	I0717 01:51:52.142310   80566 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/apiserver.key.8746f2ea -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/apiserver.key
	I0717 01:51:52.142400   80566 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/proxy-client.key
	I0717 01:51:52.142419   80566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/proxy-client.crt with IP's: []
	I0717 01:51:52.351161   80566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/proxy-client.crt ...
	I0717 01:51:52.351187   80566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/proxy-client.crt: {Name:mk4219912bbdd07cb1e34d64bfc0a209d3795d83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:52.351369   80566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/proxy-client.key ...
	I0717 01:51:52.351387   80566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/proxy-client.key: {Name:mk1e45df7f0cd37d69d7fc7b07517ccc875a0db2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
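The certs.go/crypto.go lines above generate the profile certificates signed by the shared minikubeCA, embedding the API-server IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.173]. A standard-library Go sketch of that signing step, not minikube's code: keys here are throwaway, lifetimes approximate, and errors are elided for brevity.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// throwaway CA standing in for the shared minikubeCA key pair
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// serving certificate with the IP SANs seen in the log
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.173"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }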
	I0717 01:51:52.351590   80566 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:51:52.351635   80566 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:51:52.351649   80566 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:51:52.351683   80566 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:51:52.351710   80566 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:51:52.351741   80566 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:51:52.351792   80566 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:51:52.352368   80566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:51:52.386175   80566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:51:52.416585   80566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:51:52.444723   80566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:51:52.472473   80566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 01:51:52.500188   80566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:51:52.526847   80566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:51:52.563497   80566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:51:52.596707   80566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:51:52.638984   80566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:51:52.665936   80566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:51:52.695416   80566 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:51:52.714346   80566 ssh_runner.go:195] Run: openssl version
	I0717 01:51:52.720856   80566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:51:52.734441   80566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:51:52.739436   80566 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:51:52.739495   80566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:51:52.746225   80566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:51:52.758488   80566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:51:52.770826   80566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:51:52.775598   80566 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:51:52.775656   80566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:51:52.782024   80566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:51:52.796452   80566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:51:52.811550   80566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:51:52.816630   80566 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:51:52.816683   80566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:51:52.824353   80566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
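Each openssl/ln pair above installs a CA under /etc/ssl/certs as a <subject-hash>.0 symlink so OpenSSL's lookup-by-hash can find it. A short Go sketch of the same two steps, shelling out to openssl exactly as the log does (needs root to write /etc/ssl/certs):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert computes the subject hash of a PEM and symlinks /etc/ssl/certs/<hash>.0 to it.
    func linkCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // mimic ln -fs: replace any existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }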
	I0717 01:51:52.840316   80566 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:51:52.845327   80566 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 01:51:52.845384   80566 kubeadm.go:392] StartCluster: {Name:flannel-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.173 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:52.845478   80566 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:51:52.845537   80566 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:51:52.897274   80566 cri.go:89] found id: ""
	I0717 01:51:52.897347   80566 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:51:52.910254   80566 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:51:52.924032   80566 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:51:52.937331   80566 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:51:52.937352   80566 kubeadm.go:157] found existing configuration files:
	
	I0717 01:51:52.937425   80566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:51:52.949765   80566 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:51:52.949839   80566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:51:52.963215   80566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:51:52.974459   80566 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:51:52.974549   80566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:51:52.985655   80566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:51:52.996227   80566 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:51:52.996293   80566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:51:53.008396   80566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:51:53.019458   80566 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:51:53.019518   80566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
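Before kubeadm init, each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed as stale so kubeadm can regenerate it. A Go sketch of that cleanup loop, mirroring the grep/rm pairs above (illustrative, not minikube's kubeadm.go):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err == nil && bytes.Contains(data, []byte(endpoint)) {
    			continue // config already targets the expected endpoint, keep it
    		}
    		fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    		_ = os.Remove(f) // ignore "No such file or directory", like rm -f
    	}
    }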
	I0717 01:51:53.029663   80566 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:51:53.243314   80566 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:51:53.301490   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:53.801219   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:54.301213   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:54.801120   79788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:51:54.993179   79788 kubeadm.go:1113] duration metric: took 12.846027886s to wait for elevateKubeSystemPrivileges
	I0717 01:51:54.993213   79788 kubeadm.go:394] duration metric: took 24.764889111s to StartCluster
	I0717 01:51:54.993234   79788 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:54.993331   79788 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:51:54.994506   79788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:51:54.995291   79788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 01:51:54.995313   79788 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.111 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:51:54.995399   79788 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:51:54.995479   79788 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-453036"
	I0717 01:51:54.995502   79788 config.go:182] Loaded profile config "enable-default-cni-453036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:51:54.995517   79788 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-453036"
	I0717 01:51:54.995545   79788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-453036"
	I0717 01:51:54.995508   79788 addons.go:234] Setting addon storage-provisioner=true in "enable-default-cni-453036"
	I0717 01:51:54.995583   79788 host.go:66] Checking if "enable-default-cni-453036" exists ...
	I0717 01:51:54.996022   79788 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:51:54.996060   79788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:54.996022   79788 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:51:54.996196   79788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:54.997303   79788 out.go:177] * Verifying Kubernetes components...
	I0717 01:51:55.000813   79788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:51:55.013277   79788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0717 01:51:55.013801   79788 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:55.014305   79788 main.go:141] libmachine: Using API Version  1
	I0717 01:51:55.014321   79788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:55.014627   79788 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:55.014782   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetState
	I0717 01:51:55.016250   79788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0717 01:51:55.016606   79788 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:55.018402   79788 addons.go:234] Setting addon default-storageclass=true in "enable-default-cni-453036"
	I0717 01:51:55.018438   79788 host.go:66] Checking if "enable-default-cni-453036" exists ...
	I0717 01:51:55.018803   79788 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:51:55.018818   79788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:55.019147   79788 main.go:141] libmachine: Using API Version  1
	I0717 01:51:55.019167   79788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:55.019568   79788 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:55.020158   79788 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:51:55.020184   79788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:55.035215   79788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40981
	I0717 01:51:55.035746   79788 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:55.036353   79788 main.go:141] libmachine: Using API Version  1
	I0717 01:51:55.036373   79788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:55.036803   79788 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:55.037444   79788 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:51:55.037474   79788 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:55.042649   79788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42953
	I0717 01:51:55.043002   79788 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:55.043583   79788 main.go:141] libmachine: Using API Version  1
	I0717 01:51:55.043598   79788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:55.044113   79788 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:55.044332   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetState
	I0717 01:51:55.046137   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .DriverName
	I0717 01:51:55.050509   79788 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:51:55.051815   79788 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:51:55.051831   79788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:51:55.051852   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHHostname
	I0717 01:51:55.054406   79788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0717 01:51:55.054896   79788 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:55.055286   79788 main.go:141] libmachine: Using API Version  1
	I0717 01:51:55.055299   79788 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:55.055358   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:55.055651   79788 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:55.055997   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:55.056006   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetState
	I0717 01:51:55.056043   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:55.056222   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHPort
	I0717 01:51:55.056413   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:55.056609   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHUsername
	I0717 01:51:55.056750   79788 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/enable-default-cni-453036/id_rsa Username:docker}
	I0717 01:51:55.057713   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .DriverName
	I0717 01:51:55.057909   79788 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:51:55.057923   79788 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:51:55.057951   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHHostname
	I0717 01:51:55.060705   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:55.061233   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:94:be", ip: ""} in network mk-enable-default-cni-453036: {Iface:virbr2 ExpiryTime:2024-07-17 02:51:13 +0000 UTC Type:0 Mac:52:54:00:09:94:be Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:enable-default-cni-453036 Clientid:01:52:54:00:09:94:be}
	I0717 01:51:55.061256   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | domain enable-default-cni-453036 has defined IP address 192.168.50.111 and MAC address 52:54:00:09:94:be in network mk-enable-default-cni-453036
	I0717 01:51:55.061452   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHPort
	I0717 01:51:55.061620   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHKeyPath
	I0717 01:51:55.061769   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .GetSSHUsername
	I0717 01:51:55.061905   79788 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/enable-default-cni-453036/id_rsa Username:docker}
	I0717 01:51:55.215915   79788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 01:51:55.237865   79788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:51:55.432700   79788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:51:55.520821   79788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:51:55.969829   79788 main.go:141] libmachine: Making call to close driver server
	I0717 01:51:55.969853   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .Close
	I0717 01:51:55.970140   79788 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:51:55.970158   79788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:51:55.970180   79788 main.go:141] libmachine: Making call to close driver server
	I0717 01:51:55.970188   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .Close
	I0717 01:51:55.969740   79788 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0717 01:51:55.971150   79788 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-453036" to be "Ready" ...
	I0717 01:51:55.971459   79788 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:51:55.971475   79788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:51:55.987946   79788 node_ready.go:49] node "enable-default-cni-453036" has status "Ready":"True"
	I0717 01:51:55.987970   79788 node_ready.go:38] duration metric: took 16.783756ms for node "enable-default-cni-453036" to be "Ready" ...
	I0717 01:51:55.987986   79788 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:51:56.008420   79788 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace to be "Ready" ...
	I0717 01:51:56.010549   79788 main.go:141] libmachine: Making call to close driver server
	I0717 01:51:56.010567   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .Close
	I0717 01:51:56.010857   79788 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:51:56.010878   79788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:51:56.477971   79788 kapi.go:248] "coredns" deployment in "kube-system" namespace and "enable-default-cni-453036" context rescaled to 1 replicas
	I0717 01:51:56.575766   79788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.054904779s)
	I0717 01:51:56.575819   79788 main.go:141] libmachine: Making call to close driver server
	I0717 01:51:56.575832   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .Close
	I0717 01:51:56.576158   79788 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:51:56.576179   79788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:51:56.576188   79788 main.go:141] libmachine: Making call to close driver server
	I0717 01:51:56.576197   79788 main.go:141] libmachine: (enable-default-cni-453036) Calling .Close
	I0717 01:51:56.578162   79788 main.go:141] libmachine: (enable-default-cni-453036) DBG | Closing plugin on server side
	I0717 01:51:56.578237   79788 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:51:56.578257   79788 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:51:56.580581   79788 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0717 01:51:56.582513   79788 addons.go:510] duration metric: took 1.587108526s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0717 01:51:54.327216   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:51:54.327749   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:51:54.327778   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:54.327707   82537 retry.go:31] will retry after 1.062701614s: waiting for machine to come up
	I0717 01:51:55.392028   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:51:55.392642   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:51:55.392676   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:55.392598   82537 retry.go:31] will retry after 1.389259805s: waiting for machine to come up
	I0717 01:51:56.783970   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:51:56.784532   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:51:56.784591   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:56.784487   82537 retry.go:31] will retry after 1.707854489s: waiting for machine to come up
	I0717 01:51:58.016455   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:00.016544   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:02.516005   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:51:58.493977   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:51:58.494505   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:51:58.494544   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:51:58.494479   82537 retry.go:31] will retry after 2.142287355s: waiting for machine to come up
	I0717 01:52:00.638676   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:00.639144   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:52:00.639179   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:52:00.639104   82537 retry.go:31] will retry after 3.524652075s: waiting for machine to come up
	I0717 01:52:04.517830   80566 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 01:52:04.517892   80566 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:52:04.518010   80566 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:52:04.518155   80566 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:52:04.518284   80566 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:52:04.518366   80566 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:52:04.519881   80566 out.go:204]   - Generating certificates and keys ...
	I0717 01:52:04.519976   80566 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:52:04.520053   80566 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:52:04.520123   80566 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 01:52:04.520216   80566 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 01:52:04.520283   80566 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 01:52:04.520360   80566 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 01:52:04.520429   80566 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 01:52:04.520601   80566 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-453036 localhost] and IPs [192.168.61.173 127.0.0.1 ::1]
	I0717 01:52:04.520670   80566 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 01:52:04.520833   80566 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-453036 localhost] and IPs [192.168.61.173 127.0.0.1 ::1]
	I0717 01:52:04.520931   80566 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 01:52:04.521036   80566 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 01:52:04.521100   80566 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 01:52:04.521147   80566 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:52:04.521214   80566 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:52:04.521306   80566 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 01:52:04.521390   80566 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:52:04.521493   80566 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:52:04.521584   80566 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:52:04.521794   80566 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:52:04.521908   80566 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:52:04.523307   80566 out.go:204]   - Booting up control plane ...
	I0717 01:52:04.523424   80566 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:52:04.523517   80566 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:52:04.523608   80566 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:52:04.523754   80566 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:52:04.523857   80566 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:52:04.523895   80566 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:52:04.524047   80566 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 01:52:04.524256   80566 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 01:52:04.524350   80566 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.776125ms
	I0717 01:52:04.524445   80566 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 01:52:04.524525   80566 kubeadm.go:310] [api-check] The API server is healthy after 5.50521803s
	I0717 01:52:04.524682   80566 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 01:52:04.524803   80566 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 01:52:04.524854   80566 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 01:52:04.525060   80566 kubeadm.go:310] [mark-control-plane] Marking the node flannel-453036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 01:52:04.525114   80566 kubeadm.go:310] [bootstrap-token] Using token: o1juis.xqkk77d1c8gm0wji
	I0717 01:52:04.527096   80566 out.go:204]   - Configuring RBAC rules ...
	I0717 01:52:04.527229   80566 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 01:52:04.527310   80566 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 01:52:04.527451   80566 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 01:52:04.527614   80566 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 01:52:04.527751   80566 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 01:52:04.527857   80566 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 01:52:04.527965   80566 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 01:52:04.528004   80566 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 01:52:04.528043   80566 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 01:52:04.528048   80566 kubeadm.go:310] 
	I0717 01:52:04.528102   80566 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 01:52:04.528109   80566 kubeadm.go:310] 
	I0717 01:52:04.528184   80566 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 01:52:04.528194   80566 kubeadm.go:310] 
	I0717 01:52:04.528231   80566 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 01:52:04.528281   80566 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 01:52:04.528323   80566 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 01:52:04.528329   80566 kubeadm.go:310] 
	I0717 01:52:04.528379   80566 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 01:52:04.528385   80566 kubeadm.go:310] 
	I0717 01:52:04.528423   80566 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 01:52:04.528428   80566 kubeadm.go:310] 
	I0717 01:52:04.528472   80566 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 01:52:04.528534   80566 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 01:52:04.528617   80566 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 01:52:04.528625   80566 kubeadm.go:310] 
	I0717 01:52:04.528691   80566 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 01:52:04.528755   80566 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 01:52:04.528763   80566 kubeadm.go:310] 
	I0717 01:52:04.528833   80566 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o1juis.xqkk77d1c8gm0wji \
	I0717 01:52:04.528926   80566 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 \
	I0717 01:52:04.528950   80566 kubeadm.go:310] 	--control-plane 
	I0717 01:52:04.528956   80566 kubeadm.go:310] 
	I0717 01:52:04.529024   80566 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 01:52:04.529034   80566 kubeadm.go:310] 
	I0717 01:52:04.529105   80566 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o1juis.xqkk77d1c8gm0wji \
	I0717 01:52:04.529195   80566 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 
	I0717 01:52:04.529212   80566 cni.go:84] Creating CNI manager for "flannel"
	I0717 01:52:04.530475   80566 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0717 01:52:05.015337   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:07.515771   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:04.165618   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:04.166246   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:52:04.166262   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:52:04.166213   82537 retry.go:31] will retry after 3.712105295s: waiting for machine to come up
	I0717 01:52:07.881758   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:07.882301   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find current IP address of domain bridge-453036 in network mk-bridge-453036
	I0717 01:52:07.882357   82514 main.go:141] libmachine: (bridge-453036) DBG | I0717 01:52:07.882271   82537 retry.go:31] will retry after 3.821495629s: waiting for machine to come up
	I0717 01:52:04.531491   80566 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 01:52:04.538750   80566 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 01:52:04.538769   80566 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0717 01:52:04.568191   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 01:52:04.961099   80566 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:52:04.961199   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:04.961228   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-453036 minikube.k8s.io/updated_at=2024_07_17T01_52_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=flannel-453036 minikube.k8s.io/primary=true
	I0717 01:52:05.166892   80566 ops.go:34] apiserver oom_adj: -16
	I0717 01:52:05.167096   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:05.667156   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:06.167725   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:06.668131   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:07.167757   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:07.667501   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:08.167252   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:08.667964   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:10.015027   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:12.515681   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:11.705766   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:11.706282   82514 main.go:141] libmachine: (bridge-453036) Found IP for machine: 192.168.72.138
	I0717 01:52:11.706306   82514 main.go:141] libmachine: (bridge-453036) Reserving static IP address...
	I0717 01:52:11.706319   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has current primary IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:11.706718   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find host DHCP lease matching {name: "bridge-453036", mac: "52:54:00:2d:dd:f6", ip: "192.168.72.138"} in network mk-bridge-453036
	I0717 01:52:11.783527   82514 main.go:141] libmachine: (bridge-453036) DBG | Getting to WaitForSSH function...
	I0717 01:52:11.783550   82514 main.go:141] libmachine: (bridge-453036) Reserved static IP address: 192.168.72.138
	I0717 01:52:11.783561   82514 main.go:141] libmachine: (bridge-453036) Waiting for SSH to be available...
	I0717 01:52:11.786294   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:11.786545   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036
	I0717 01:52:11.786579   82514 main.go:141] libmachine: (bridge-453036) DBG | unable to find defined IP address of network mk-bridge-453036 interface with MAC address 52:54:00:2d:dd:f6
	I0717 01:52:11.786728   82514 main.go:141] libmachine: (bridge-453036) DBG | Using SSH client type: external
	I0717 01:52:11.786756   82514 main.go:141] libmachine: (bridge-453036) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/id_rsa (-rw-------)
	I0717 01:52:11.786781   82514 main.go:141] libmachine: (bridge-453036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:52:11.786797   82514 main.go:141] libmachine: (bridge-453036) DBG | About to run SSH command:
	I0717 01:52:11.786812   82514 main.go:141] libmachine: (bridge-453036) DBG | exit 0
	I0717 01:52:11.790270   82514 main.go:141] libmachine: (bridge-453036) DBG | SSH cmd err, output: exit status 255: 
	I0717 01:52:11.790293   82514 main.go:141] libmachine: (bridge-453036) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 01:52:11.790304   82514 main.go:141] libmachine: (bridge-453036) DBG | command : exit 0
	I0717 01:52:11.790318   82514 main.go:141] libmachine: (bridge-453036) DBG | err     : exit status 255
	I0717 01:52:11.790332   82514 main.go:141] libmachine: (bridge-453036) DBG | output  : 
	I0717 01:52:09.167227   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:09.667329   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:10.167292   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:10.668125   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:11.167508   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:11.667673   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:12.168029   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:12.667222   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:13.167207   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:13.668064   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:14.167905   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:14.668258   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:15.167514   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:15.667787   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:16.167699   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:16.667989   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:17.167730   80566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:17.338052   80566 kubeadm.go:1113] duration metric: took 12.376917935s to wait for elevateKubeSystemPrivileges
	I0717 01:52:17.338088   80566 kubeadm.go:394] duration metric: took 24.492709604s to StartCluster
	I0717 01:52:17.338107   80566 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:52:17.338192   80566 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:52:17.340006   80566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:52:17.340259   80566 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.173 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:52:17.340318   80566 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 01:52:17.340416   80566 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:52:17.340484   80566 addons.go:69] Setting storage-provisioner=true in profile "flannel-453036"
	I0717 01:52:17.340494   80566 config.go:182] Loaded profile config "flannel-453036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:52:17.340522   80566 addons.go:234] Setting addon storage-provisioner=true in "flannel-453036"
	I0717 01:52:17.340534   80566 addons.go:69] Setting default-storageclass=true in profile "flannel-453036"
	I0717 01:52:17.340577   80566 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-453036"
	I0717 01:52:17.340579   80566 host.go:66] Checking if "flannel-453036" exists ...
	I0717 01:52:17.341010   80566 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:52:17.341032   80566 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:52:17.341039   80566 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:52:17.341060   80566 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:52:17.342003   80566 out.go:177] * Verifying Kubernetes components...
	I0717 01:52:17.343291   80566 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:52:17.358082   80566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I0717 01:52:17.358783   80566 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:52:17.359471   80566 main.go:141] libmachine: Using API Version  1
	I0717 01:52:17.359492   80566 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:52:17.360011   80566 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:52:17.360136   80566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I0717 01:52:17.360341   80566 main.go:141] libmachine: (flannel-453036) Calling .GetState
	I0717 01:52:17.360549   80566 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:52:17.361853   80566 main.go:141] libmachine: Using API Version  1
	I0717 01:52:17.361870   80566 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:52:17.362312   80566 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:52:17.364307   80566 addons.go:234] Setting addon default-storageclass=true in "flannel-453036"
	I0717 01:52:17.364356   80566 host.go:66] Checking if "flannel-453036" exists ...
	I0717 01:52:17.364790   80566 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:52:17.364825   80566 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:52:17.365610   80566 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:52:17.365758   80566 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:52:17.381568   80566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41331
	I0717 01:52:17.382069   80566 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:52:17.382654   80566 main.go:141] libmachine: Using API Version  1
	I0717 01:52:17.382686   80566 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:52:17.383077   80566 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:52:17.383741   80566 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:52:17.383777   80566 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:52:17.387009   80566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36531
	I0717 01:52:17.387660   80566 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:52:17.388321   80566 main.go:141] libmachine: Using API Version  1
	I0717 01:52:17.388342   80566 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:52:17.388946   80566 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:52:17.389345   80566 main.go:141] libmachine: (flannel-453036) Calling .GetState
	I0717 01:52:17.391474   80566 main.go:141] libmachine: (flannel-453036) Calling .DriverName
	I0717 01:52:17.393341   80566 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:52:14.791019   82514 main.go:141] libmachine: (bridge-453036) DBG | Getting to WaitForSSH function...
	I0717 01:52:14.793663   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:14.794018   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:14.794051   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:14.794190   82514 main.go:141] libmachine: (bridge-453036) DBG | Using SSH client type: external
	I0717 01:52:14.794223   82514 main.go:141] libmachine: (bridge-453036) DBG | Using SSH private key: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/id_rsa (-rw-------)
	I0717 01:52:14.794250   82514 main.go:141] libmachine: (bridge-453036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:52:14.794261   82514 main.go:141] libmachine: (bridge-453036) DBG | About to run SSH command:
	I0717 01:52:14.794276   82514 main.go:141] libmachine: (bridge-453036) DBG | exit 0
	I0717 01:52:14.920726   82514 main.go:141] libmachine: (bridge-453036) DBG | SSH cmd err, output: <nil>: 
	I0717 01:52:14.921086   82514 main.go:141] libmachine: (bridge-453036) KVM machine creation complete!
	I0717 01:52:14.921465   82514 main.go:141] libmachine: (bridge-453036) Calling .GetConfigRaw
	I0717 01:52:14.922037   82514 main.go:141] libmachine: (bridge-453036) Calling .DriverName
	I0717 01:52:14.922269   82514 main.go:141] libmachine: (bridge-453036) Calling .DriverName
	I0717 01:52:14.922427   82514 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 01:52:14.922442   82514 main.go:141] libmachine: (bridge-453036) Calling .GetState
	I0717 01:52:14.923637   82514 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 01:52:14.923653   82514 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 01:52:14.923661   82514 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 01:52:14.923669   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHHostname
	I0717 01:52:14.926176   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:14.926533   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:14.926558   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:14.926694   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHPort
	I0717 01:52:14.926845   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:14.927016   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:14.927123   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHUsername
	I0717 01:52:14.927291   82514 main.go:141] libmachine: Using SSH client type: native
	I0717 01:52:14.927528   82514 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I0717 01:52:14.927545   82514 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 01:52:15.031867   82514 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:52:15.031887   82514 main.go:141] libmachine: Detecting the provisioner...
	I0717 01:52:15.031895   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHHostname
	I0717 01:52:15.034954   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.035380   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:15.035408   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.035528   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHPort
	I0717 01:52:15.035714   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:15.035894   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:15.036030   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHUsername
	I0717 01:52:15.036234   82514 main.go:141] libmachine: Using SSH client type: native
	I0717 01:52:15.036399   82514 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I0717 01:52:15.036410   82514 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 01:52:15.145308   82514 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 01:52:15.145394   82514 main.go:141] libmachine: found compatible host: buildroot
	I0717 01:52:15.145410   82514 main.go:141] libmachine: Provisioning with buildroot...
	I0717 01:52:15.145422   82514 main.go:141] libmachine: (bridge-453036) Calling .GetMachineName
	I0717 01:52:15.145700   82514 buildroot.go:166] provisioning hostname "bridge-453036"
	I0717 01:52:15.145732   82514 main.go:141] libmachine: (bridge-453036) Calling .GetMachineName
	I0717 01:52:15.145971   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHHostname
	I0717 01:52:15.148326   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.148690   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:15.148715   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.148870   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHPort
	I0717 01:52:15.149062   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:15.149218   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:15.149379   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHUsername
	I0717 01:52:15.149550   82514 main.go:141] libmachine: Using SSH client type: native
	I0717 01:52:15.149760   82514 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I0717 01:52:15.149778   82514 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-453036 && echo "bridge-453036" | sudo tee /etc/hostname
	I0717 01:52:15.272390   82514 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-453036
	
	I0717 01:52:15.272437   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHHostname
	I0717 01:52:15.275288   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.275641   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:15.275674   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.275826   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHPort
	I0717 01:52:15.276036   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:15.276223   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:15.276388   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHUsername
	I0717 01:52:15.276591   82514 main.go:141] libmachine: Using SSH client type: native
	I0717 01:52:15.276792   82514 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I0717 01:52:15.276811   82514 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-453036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-453036/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-453036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:52:15.394309   82514 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:52:15.394335   82514 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12897/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12897/.minikube}
	I0717 01:52:15.394375   82514 buildroot.go:174] setting up certificates
	I0717 01:52:15.394382   82514 provision.go:84] configureAuth start
	I0717 01:52:15.394392   82514 main.go:141] libmachine: (bridge-453036) Calling .GetMachineName
	I0717 01:52:15.394722   82514 main.go:141] libmachine: (bridge-453036) Calling .GetIP
	I0717 01:52:15.397768   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.398090   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:15.398115   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.398268   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHHostname
	I0717 01:52:15.400748   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.401049   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:15.401067   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.401212   82514 provision.go:143] copyHostCerts
	I0717 01:52:15.401298   82514 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem, removing ...
	I0717 01:52:15.401311   82514 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem
	I0717 01:52:15.401377   82514 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/key.pem (1675 bytes)
	I0717 01:52:15.401511   82514 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem, removing ...
	I0717 01:52:15.401525   82514 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem
	I0717 01:52:15.401559   82514 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/ca.pem (1082 bytes)
	I0717 01:52:15.401653   82514 exec_runner.go:144] found /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem, removing ...
	I0717 01:52:15.401664   82514 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem
	I0717 01:52:15.401690   82514 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12897/.minikube/cert.pem (1123 bytes)
	I0717 01:52:15.401772   82514 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem org=jenkins.bridge-453036 san=[127.0.0.1 192.168.72.138 bridge-453036 localhost minikube]
	I0717 01:52:15.480078   82514 provision.go:177] copyRemoteCerts
	I0717 01:52:15.480130   82514 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:52:15.480152   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHHostname
	I0717 01:52:15.482721   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.483013   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:15.483039   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.483214   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHPort
	I0717 01:52:15.483411   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:15.483557   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHUsername
	I0717 01:52:15.483668   82514 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/id_rsa Username:docker}
	I0717 01:52:15.567329   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 01:52:15.594705   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 01:52:15.620586   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:52:15.647896   82514 provision.go:87] duration metric: took 253.503486ms to configureAuth
	I0717 01:52:15.647923   82514 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:52:15.648114   82514 config.go:182] Loaded profile config "bridge-453036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:52:15.648198   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHHostname
	I0717 01:52:15.650988   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.651334   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:15.651360   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.651602   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHPort
	I0717 01:52:15.651795   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:15.651991   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:15.652167   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHUsername
	I0717 01:52:15.652365   82514 main.go:141] libmachine: Using SSH client type: native
	I0717 01:52:15.652589   82514 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I0717 01:52:15.652615   82514 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:52:15.924593   82514 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:52:15.924622   82514 main.go:141] libmachine: Checking connection to Docker...
	I0717 01:52:15.924632   82514 main.go:141] libmachine: (bridge-453036) Calling .GetURL
	I0717 01:52:15.926025   82514 main.go:141] libmachine: (bridge-453036) DBG | Using libvirt version 6000000
	I0717 01:52:15.928671   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.929056   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:15.929112   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.929275   82514 main.go:141] libmachine: Docker is up and running!
	I0717 01:52:15.929292   82514 main.go:141] libmachine: Reticulating splines...
	I0717 01:52:15.929300   82514 client.go:171] duration metric: took 27.367483386s to LocalClient.Create
	I0717 01:52:15.929319   82514 start.go:167] duration metric: took 27.36754356s to libmachine.API.Create "bridge-453036"
	I0717 01:52:15.929331   82514 start.go:293] postStartSetup for "bridge-453036" (driver="kvm2")
	I0717 01:52:15.929342   82514 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:52:15.929368   82514 main.go:141] libmachine: (bridge-453036) Calling .DriverName
	I0717 01:52:15.929608   82514 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:52:15.929629   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHHostname
	I0717 01:52:15.931756   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.932048   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:15.932075   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:15.932222   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHPort
	I0717 01:52:15.932422   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:15.932613   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHUsername
	I0717 01:52:15.932769   82514 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/id_rsa Username:docker}
	I0717 01:52:16.016341   82514 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:52:16.021288   82514 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:52:16.021307   82514 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/addons for local assets ...
	I0717 01:52:16.021370   82514 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12897/.minikube/files for local assets ...
	I0717 01:52:16.021470   82514 filesync.go:149] local asset: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem -> 200682.pem in /etc/ssl/certs
	I0717 01:52:16.021576   82514 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:52:16.032076   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:52:16.057208   82514 start.go:296] duration metric: took 127.865521ms for postStartSetup
	I0717 01:52:16.057251   82514 main.go:141] libmachine: (bridge-453036) Calling .GetConfigRaw
	I0717 01:52:16.057810   82514 main.go:141] libmachine: (bridge-453036) Calling .GetIP
	I0717 01:52:16.060382   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:16.060750   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:16.060779   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:16.061042   82514 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/config.json ...
	I0717 01:52:16.061251   82514 start.go:128] duration metric: took 27.523138604s to createHost
	I0717 01:52:16.061274   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHHostname
	I0717 01:52:16.063659   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:16.063938   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:16.063965   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:16.064110   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHPort
	I0717 01:52:16.064271   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:16.064438   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:16.064572   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHUsername
	I0717 01:52:16.064737   82514 main.go:141] libmachine: Using SSH client type: native
	I0717 01:52:16.064890   82514 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I0717 01:52:16.064903   82514 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:52:16.173873   82514 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181136.150738969
	
	I0717 01:52:16.173899   82514 fix.go:216] guest clock: 1721181136.150738969
	I0717 01:52:16.173909   82514 fix.go:229] Guest: 2024-07-17 01:52:16.150738969 +0000 UTC Remote: 2024-07-17 01:52:16.061261773 +0000 UTC m=+27.661273634 (delta=89.477196ms)
	I0717 01:52:16.173932   82514 fix.go:200] guest clock delta is within tolerance: 89.477196ms
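
The fix.go lines above read the guest's clock over SSH (the garbled %!s(MISSING).%!N(MISSING) verbs are a logging artifact for a seconds.nanoseconds date format; the reply 1721181136.150738969 shows the actual output) and accept the ~89ms delta against the local clock. Below is a minimal Go sketch of that delta check, standard library only and not minikube's actual fix.go; the tolerance value is illustrative.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch parses "seconds.nanoseconds" output such as
// "1721181136.150738969" into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad the fraction to 9 digits so ".15" means 150ms, not 15ns.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1721181136.150738969") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // illustrative threshold, not minikube's
	if delta > tolerance || delta < -tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
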
	I0717 01:52:16.173942   82514 start.go:83] releasing machines lock for "bridge-453036", held for 27.635960328s
	I0717 01:52:16.173969   82514 main.go:141] libmachine: (bridge-453036) Calling .DriverName
	I0717 01:52:16.174251   82514 main.go:141] libmachine: (bridge-453036) Calling .GetIP
	I0717 01:52:16.177352   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:16.177745   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:16.177770   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:16.177916   82514 main.go:141] libmachine: (bridge-453036) Calling .DriverName
	I0717 01:52:16.178509   82514 main.go:141] libmachine: (bridge-453036) Calling .DriverName
	I0717 01:52:16.178717   82514 main.go:141] libmachine: (bridge-453036) Calling .DriverName
	I0717 01:52:16.178784   82514 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:52:16.178839   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHHostname
	I0717 01:52:16.178964   82514 ssh_runner.go:195] Run: cat /version.json
	I0717 01:52:16.178988   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHHostname
	I0717 01:52:16.181861   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:16.182051   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:16.182265   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:16.182295   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:16.182481   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHPort
	I0717 01:52:16.182580   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:16.182615   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:16.182629   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:16.182776   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHUsername
	I0717 01:52:16.182827   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHPort
	I0717 01:52:16.182939   82514 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/id_rsa Username:docker}
	I0717 01:52:16.182962   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:16.183076   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHUsername
	I0717 01:52:16.183165   82514 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/id_rsa Username:docker}
	I0717 01:52:16.271024   82514 ssh_runner.go:195] Run: systemctl --version
	I0717 01:52:16.292853   82514 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:52:16.460604   82514 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:52:16.467101   82514 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:52:16.467186   82514 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:52:16.482429   82514 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:52:16.482453   82514 start.go:495] detecting cgroup driver to use...
	I0717 01:52:16.482513   82514 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:52:16.500284   82514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:52:16.516208   82514 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:52:16.516281   82514 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:52:16.532324   82514 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:52:16.551671   82514 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:52:16.691394   82514 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:52:16.848396   82514 docker.go:233] disabling docker service ...
	I0717 01:52:16.848468   82514 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:52:16.867662   82514 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:52:16.882353   82514 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:52:17.043261   82514 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:52:17.183739   82514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:52:17.200271   82514 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:52:17.224016   82514 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:52:17.224093   82514 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:52:17.238388   82514 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:52:17.238462   82514 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:52:17.251062   82514 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:52:17.262264   82514 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:52:17.275885   82514 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:52:17.289592   82514 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:52:17.301309   82514 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:52:17.323628   82514 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
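
The sed calls above point CRI-O at the registry.k8s.io/pause:3.9 pause image, set cgroup_manager to "cgroupfs", and open net.ipv4.ip_unprivileged_port_start by rewriting keys in /etc/crio/crio.conf.d/02-crio.conf. A small Go sketch of that style of whole-line key rewrite, run against a hypothetical local copy of the drop-in (illustrative only; the real flow shells out to sed over SSH):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey rewrites every `key = ...` line in a TOML-ish config to the
// given quoted value, much like `sed -i 's|^.*key = .*$|key = "value"|'`.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// "02-crio.conf" here is a hypothetical local copy, not the real /etc/crio path.
	if err := setConfKey("02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
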
	I0717 01:52:17.336501   82514 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:52:17.350006   82514 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:52:17.350056   82514 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:52:17.372855   82514 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
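
The runner above probes net.bridge.bridge-nf-call-iptables, treats the missing key as tolerable, loads br_netfilter, and then enables IPv4 forwarding. A minimal Go sketch of reading those two /proc/sys knobs directly (illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// readSysctl reads a /proc/sys value, e.g. "net/ipv4/ip_forward". A missing
// file (module not loaded yet) is reported rather than treated as fatal,
// mirroring the "might be okay" handling in the log above.
func readSysctl(key string) (string, error) {
	b, err := os.ReadFile("/proc/sys/" + key)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	for _, key := range []string{
		"net/bridge/bridge-nf-call-iptables", // only exists once br_netfilter is loaded
		"net/ipv4/ip_forward",                // must be "1" for pod traffic to be routed
	} {
		v, err := readSysctl(key)
		if err != nil {
			fmt.Printf("%s: not available (%v)\n", key, err)
			continue
		}
		fmt.Printf("%s = %s\n", key, v)
	}
}
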
	I0717 01:52:17.386875   82514 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:52:17.526675   82514 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:52:17.681959   82514 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:52:17.682033   82514 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:52:17.687329   82514 start.go:563] Will wait 60s for crictl version
	I0717 01:52:17.687398   82514 ssh_runner.go:195] Run: which crictl
	I0717 01:52:17.691965   82514 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:52:17.741838   82514 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:52:17.741915   82514 ssh_runner.go:195] Run: crio --version
	I0717 01:52:17.773383   82514 ssh_runner.go:195] Run: crio --version
	I0717 01:52:17.810272   82514 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:52:15.015496   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:17.516756   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:17.811467   82514 main.go:141] libmachine: (bridge-453036) Calling .GetIP
	I0717 01:52:17.814196   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:17.814598   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:17.814629   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:17.814840   82514 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 01:52:17.819019   82514 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
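
The two commands above check for an existing host.minikube.internal record and then rewrite /etc/hosts with a fresh "IP<tab>hostname" entry. A small Go sketch of the same drop-stale-then-append rewrite, operating on a hypothetical local file rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostRecord removes any existing tab-separated line for hostname and
// appends "ip\thostname", mirroring the grep -v / echo pipeline in the log.
func ensureHostRecord(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+hostname) {
			continue // drop the stale record
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// "hosts.test" is a hypothetical path for illustration.
	if err := ensureHostRecord("hosts.test", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
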
	I0717 01:52:17.832926   82514 kubeadm.go:883] updating cluster {Name:bridge-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:52:17.833042   82514 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:52:17.833105   82514 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:52:17.871626   82514 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:52:17.871720   82514 ssh_runner.go:195] Run: which lz4
	I0717 01:52:17.876167   82514 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:52:17.880234   82514 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:52:17.880262   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:52:17.394819   80566 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:52:17.394838   80566 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:52:17.394857   80566 main.go:141] libmachine: (flannel-453036) Calling .GetSSHHostname
	I0717 01:52:17.398598   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:52:17.399323   80566 main.go:141] libmachine: (flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:b7:c4", ip: ""} in network mk-flannel-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:51:38 +0000 UTC Type:0 Mac:52:54:00:24:b7:c4 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:flannel-453036 Clientid:01:52:54:00:24:b7:c4}
	I0717 01:52:17.399344   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined IP address 192.168.61.173 and MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:52:17.399544   80566 main.go:141] libmachine: (flannel-453036) Calling .GetSSHPort
	I0717 01:52:17.399729   80566 main.go:141] libmachine: (flannel-453036) Calling .GetSSHKeyPath
	I0717 01:52:17.399922   80566 main.go:141] libmachine: (flannel-453036) Calling .GetSSHUsername
	I0717 01:52:17.400094   80566 sshutil.go:53] new ssh client: &{IP:192.168.61.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/flannel-453036/id_rsa Username:docker}
	I0717 01:52:17.400420   80566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
	I0717 01:52:17.400800   80566 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:52:17.401305   80566 main.go:141] libmachine: Using API Version  1
	I0717 01:52:17.401324   80566 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:52:17.401778   80566 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:52:17.401976   80566 main.go:141] libmachine: (flannel-453036) Calling .GetState
	I0717 01:52:17.403538   80566 main.go:141] libmachine: (flannel-453036) Calling .DriverName
	I0717 01:52:17.403730   80566 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:52:17.403750   80566 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:52:17.403768   80566 main.go:141] libmachine: (flannel-453036) Calling .GetSSHHostname
	I0717 01:52:17.407198   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:52:17.407580   80566 main.go:141] libmachine: (flannel-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:b7:c4", ip: ""} in network mk-flannel-453036: {Iface:virbr3 ExpiryTime:2024-07-17 02:51:38 +0000 UTC Type:0 Mac:52:54:00:24:b7:c4 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:flannel-453036 Clientid:01:52:54:00:24:b7:c4}
	I0717 01:52:17.407602   80566 main.go:141] libmachine: (flannel-453036) DBG | domain flannel-453036 has defined IP address 192.168.61.173 and MAC address 52:54:00:24:b7:c4 in network mk-flannel-453036
	I0717 01:52:17.407790   80566 main.go:141] libmachine: (flannel-453036) Calling .GetSSHPort
	I0717 01:52:17.407970   80566 main.go:141] libmachine: (flannel-453036) Calling .GetSSHKeyPath
	I0717 01:52:17.408120   80566 main.go:141] libmachine: (flannel-453036) Calling .GetSSHUsername
	I0717 01:52:17.408230   80566 sshutil.go:53] new ssh client: &{IP:192.168.61.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/flannel-453036/id_rsa Username:docker}
	I0717 01:52:17.696449   80566 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:52:17.696710   80566 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 01:52:17.857646   80566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:52:17.873428   80566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:52:18.403042   80566 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0717 01:52:18.404418   80566 node_ready.go:35] waiting up to 15m0s for node "flannel-453036" to be "Ready" ...
	I0717 01:52:18.917093   80566 kapi.go:248] "coredns" deployment in "kube-system" namespace and "flannel-453036" context rescaled to 1 replicas
	I0717 01:52:19.173864   80566 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.316172647s)
	I0717 01:52:19.173893   80566 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.30043749s)
	I0717 01:52:19.173917   80566 main.go:141] libmachine: Making call to close driver server
	I0717 01:52:19.173933   80566 main.go:141] libmachine: (flannel-453036) Calling .Close
	I0717 01:52:19.173917   80566 main.go:141] libmachine: Making call to close driver server
	I0717 01:52:19.173996   80566 main.go:141] libmachine: (flannel-453036) Calling .Close
	I0717 01:52:19.174246   80566 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:52:19.174268   80566 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:52:19.174279   80566 main.go:141] libmachine: Making call to close driver server
	I0717 01:52:19.174288   80566 main.go:141] libmachine: (flannel-453036) Calling .Close
	I0717 01:52:19.176678   80566 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:52:19.176685   80566 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:52:19.176690   80566 main.go:141] libmachine: (flannel-453036) DBG | Closing plugin on server side
	I0717 01:52:19.176696   80566 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:52:19.176699   80566 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:52:19.176710   80566 main.go:141] libmachine: Making call to close driver server
	I0717 01:52:19.176719   80566 main.go:141] libmachine: (flannel-453036) Calling .Close
	I0717 01:52:19.178390   80566 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:52:19.178414   80566 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:52:19.178437   80566 main.go:141] libmachine: (flannel-453036) DBG | Closing plugin on server side
	I0717 01:52:19.200233   80566 main.go:141] libmachine: Making call to close driver server
	I0717 01:52:19.200262   80566 main.go:141] libmachine: (flannel-453036) Calling .Close
	I0717 01:52:19.200705   80566 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:52:19.200720   80566 main.go:141] libmachine: (flannel-453036) DBG | Closing plugin on server side
	I0717 01:52:19.200725   80566 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:52:19.202460   80566 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 01:52:19.520157   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:22.016203   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:19.362439   82514 crio.go:462] duration metric: took 1.486304049s to copy over tarball
	I0717 01:52:19.362534   82514 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:52:21.812852   82514 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.450291273s)
	I0717 01:52:21.812875   82514 crio.go:469] duration metric: took 2.450409022s to extract the tarball
	I0717 01:52:21.812882   82514 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:52:21.850854   82514 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:52:21.905593   82514 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:52:21.905620   82514 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:52:21.905629   82514 kubeadm.go:934] updating node { 192.168.72.138 8443 v1.30.2 crio true true} ...
	I0717 01:52:21.905743   82514 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-453036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:bridge-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
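
The kubelet drop-in above is generated from node-specific values (name, IP, Kubernetes version, runtime service). A minimal text/template sketch of rendering such an ExecStart line; the template text and field names are illustrative, not minikube's actual kubelet template:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is an illustrative template for the drop-in shown above;
// minikube's real template lives in its codebase and carries more fields.
const kubeletUnit = `[Unit]
Wants={{.RuntimeService}}

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, struct {
		RuntimeService, KubernetesVersion, NodeName, NodeIP string
	}{"crio.service", "v1.30.2", "bridge-453036", "192.168.72.138"})
}
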
	I0717 01:52:21.905821   82514 ssh_runner.go:195] Run: crio config
	I0717 01:52:21.967098   82514 cni.go:84] Creating CNI manager for "bridge"
	I0717 01:52:21.967133   82514 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:52:21.967167   82514 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.138 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-453036 NodeName:bridge-453036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:52:21.967340   82514 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-453036"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
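
The kubeadm config above pins podSubnet to 10.244.0.0/16 and serviceSubnet to 10.96.0.0/12. A quick, hypothetical sanity check that the two CIDR ranges do not overlap, using only the Go standard library:

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR blocks share any addresses; CIDR blocks
// are either disjoint or nested, so checking each base address suffices.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, podCIDR, _ := net.ParseCIDR("10.244.0.0/16") // podSubnet from the config above
	_, svcCIDR, _ := net.ParseCIDR("10.96.0.0/12")  // serviceSubnet from the config above
	fmt.Println("pod/service CIDRs overlap:", overlaps(podCIDR, svcCIDR))
}
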
	I0717 01:52:21.967406   82514 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:52:21.978767   82514 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:52:21.978847   82514 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:52:21.990200   82514 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 01:52:22.010193   82514 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:52:22.030805   82514 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0717 01:52:22.052103   82514 ssh_runner.go:195] Run: grep 192.168.72.138	control-plane.minikube.internal$ /etc/hosts
	I0717 01:52:22.056338   82514 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:52:22.070000   82514 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:52:22.212445   82514 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:52:22.231446   82514 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036 for IP: 192.168.72.138
	I0717 01:52:22.231470   82514 certs.go:194] generating shared ca certs ...
	I0717 01:52:22.231488   82514 certs.go:226] acquiring lock for ca certs: {Name:mkf91c55409ea76cfdc37f3e8e02a9296791b311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:52:22.231653   82514 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key
	I0717 01:52:22.231702   82514 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key
	I0717 01:52:22.231714   82514 certs.go:256] generating profile certs ...
	I0717 01:52:22.231788   82514 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.key
	I0717 01:52:22.231807   82514 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt with IP's: []
	I0717 01:52:22.334411   82514 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt ...
	I0717 01:52:22.334444   82514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: {Name:mk0b167ec76d629221cea48800562fb605f4d14b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:52:22.334634   82514 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.key ...
	I0717 01:52:22.334651   82514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.key: {Name:mkfffea9ac855595a8658ee65f00a841891eb8ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:52:22.334765   82514 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/apiserver.key.a9ed728b
	I0717 01:52:22.334786   82514 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/apiserver.crt.a9ed728b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.138]
	I0717 01:52:22.508469   82514 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/apiserver.crt.a9ed728b ...
	I0717 01:52:22.508502   82514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/apiserver.crt.a9ed728b: {Name:mk6441ebd5b2b63862ba6006fc30e91b81607b26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:52:22.508721   82514 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/apiserver.key.a9ed728b ...
	I0717 01:52:22.508741   82514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/apiserver.key.a9ed728b: {Name:mk667afdddf77dd7e8051c87aa5533798010da9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:52:22.508873   82514 certs.go:381] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/apiserver.crt.a9ed728b -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/apiserver.crt
	I0717 01:52:22.508998   82514 certs.go:385] copying /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/apiserver.key.a9ed728b -> /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/apiserver.key
	I0717 01:52:22.509088   82514 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/proxy-client.key
	I0717 01:52:22.509111   82514 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/proxy-client.crt with IP's: []
	I0717 01:52:22.807571   82514 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/proxy-client.crt ...
	I0717 01:52:22.807597   82514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/proxy-client.crt: {Name:mka561a7d118a4a06a5e64b4ed8829af24d4bcfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:52:22.807777   82514 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/proxy-client.key ...
	I0717 01:52:22.807790   82514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/proxy-client.key: {Name:mkce16b40ef6a04fdcaf25312f958a0d2f9bdf8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
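
The certs.go steps above generate profile certificates whose SANs include 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.138. A minimal crypto/x509 sketch of issuing a certificate with IP SANs; it is self-signed with an ECDSA key purely for brevity and is not minikube's crypto.go (minikube signs against its own minikubeCA instead):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-demo"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs comparable to the apiserver cert in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.72.138"),
		},
	}
	// Self-signed for brevity; a real profile cert is signed by the cluster CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
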
	I0717 01:52:22.808012   82514 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem (1338 bytes)
	W0717 01:52:22.808062   82514 certs.go:480] ignoring /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068_empty.pem, impossibly tiny 0 bytes
	I0717 01:52:22.808077   82514 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 01:52:22.808106   82514 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/ca.pem (1082 bytes)
	I0717 01:52:22.808151   82514 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:52:22.808188   82514 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/certs/key.pem (1675 bytes)
	I0717 01:52:22.808242   82514 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem (1708 bytes)
	I0717 01:52:22.808936   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:52:22.842328   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:52:22.875742   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:52:22.911030   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 01:52:22.953190   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 01:52:22.991473   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:52:23.016822   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:52:23.041378   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:52:23.066721   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/ssl/certs/200682.pem --> /usr/share/ca-certificates/200682.pem (1708 bytes)
	I0717 01:52:23.100034   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:52:23.135107   82514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12897/.minikube/certs/20068.pem --> /usr/share/ca-certificates/20068.pem (1338 bytes)
	I0717 01:52:23.160791   82514 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:52:23.177471   82514 ssh_runner.go:195] Run: openssl version
	I0717 01:52:23.183504   82514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200682.pem && ln -fs /usr/share/ca-certificates/200682.pem /etc/ssl/certs/200682.pem"
	I0717 01:52:23.195847   82514 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200682.pem
	I0717 01:52:23.200315   82514 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:16 /usr/share/ca-certificates/200682.pem
	I0717 01:52:23.200390   82514 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200682.pem
	I0717 01:52:23.206275   82514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200682.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:52:23.218056   82514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:52:23.230321   82514 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:52:23.234768   82514 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:52:23.234818   82514 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:52:23.240655   82514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:52:23.252774   82514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20068.pem && ln -fs /usr/share/ca-certificates/20068.pem /etc/ssl/certs/20068.pem"
	I0717 01:52:23.265226   82514 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20068.pem
	I0717 01:52:23.270667   82514 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:16 /usr/share/ca-certificates/20068.pem
	I0717 01:52:23.270713   82514 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20068.pem
	I0717 01:52:23.276367   82514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20068.pem /etc/ssl/certs/51391683.0"
	I0717 01:52:23.289317   82514 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:52:23.293907   82514 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 01:52:23.293962   82514 kubeadm.go:392] StartCluster: {Name:bridge-453036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-453036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:52:23.294055   82514 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:52:23.294120   82514 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:52:23.336597   82514 cri.go:89] found id: ""
	I0717 01:52:23.336671   82514 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:52:23.348901   82514 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:52:23.359259   82514 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:52:23.369740   82514 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:52:23.369761   82514 kubeadm.go:157] found existing configuration files:
	
	I0717 01:52:23.369810   82514 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:52:23.382895   82514 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:52:23.382966   82514 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:52:23.395846   82514 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:52:23.406795   82514 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:52:23.406866   82514 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:52:23.417760   82514 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:52:23.426969   82514 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:52:23.427026   82514 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:52:23.436597   82514 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:52:23.446509   82514 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:52:23.446567   82514 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:52:23.463935   82514 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:52:23.539471   82514 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 01:52:23.539589   82514 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:52:23.680933   82514 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:52:23.681144   82514 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:52:23.681274   82514 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:52:23.932675   82514 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:52:19.203846   80566 addons.go:510] duration metric: took 1.863426011s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0717 01:52:20.408538   80566 node_ready.go:53] node "flannel-453036" has status "Ready":"False"
	I0717 01:52:22.408784   80566 node_ready.go:53] node "flannel-453036" has status "Ready":"False"
	I0717 01:52:24.043988   82514 out.go:204]   - Generating certificates and keys ...
	I0717 01:52:24.044097   82514 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:52:24.044173   82514 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:52:24.178798   82514 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 01:52:24.241727   82514 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 01:52:24.391507   82514 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 01:52:24.702638   82514 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 01:52:25.057560   82514 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 01:52:25.057698   82514 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-453036 localhost] and IPs [192.168.72.138 127.0.0.1 ::1]
	I0717 01:52:25.179582   82514 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 01:52:25.179799   82514 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-453036 localhost] and IPs [192.168.72.138 127.0.0.1 ::1]
	I0717 01:52:25.257811   82514 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 01:52:25.444269   82514 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 01:52:25.733103   82514 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 01:52:25.733196   82514 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:52:25.986056   82514 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:52:26.152308   82514 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 01:52:26.405298   82514 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:52:26.617038   82514 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:52:26.758741   82514 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:52:26.759410   82514 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:52:26.762832   82514 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:52:24.016254   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:26.016548   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:26.765122   82514 out.go:204]   - Booting up control plane ...
	I0717 01:52:26.765253   82514 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:52:26.765353   82514 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:52:26.765438   82514 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:52:26.783986   82514 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:52:26.784893   82514 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:52:26.784976   82514 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:52:26.923644   82514 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 01:52:26.923723   82514 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 01:52:27.925376   82514 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00264865s
	I0717 01:52:27.925477   82514 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 01:52:24.647986   80566 node_ready.go:53] node "flannel-453036" has status "Ready":"False"
	I0717 01:52:25.907932   80566 node_ready.go:49] node "flannel-453036" has status "Ready":"True"
	I0717 01:52:25.907957   80566 node_ready.go:38] duration metric: took 7.503513396s for node "flannel-453036" to be "Ready" ...
	I0717 01:52:25.907965   80566 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:52:25.916129   80566 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-846wb" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:27.924704   80566 pod_ready.go:102] pod "coredns-7db6d8ff4d-846wb" in "kube-system" namespace has status "Ready":"False"
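
The pod_ready.go lines above poll the coredns pod until its "Ready" condition turns true. A minimal client-go sketch of the same check against a single pod, assuming a hypothetical kubeconfig path; this is an illustration, not minikube's pod_ready helper:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// "/path/to/kubeconfig" is a placeholder; the tests use the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-846wb", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}
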
	I0717 01:52:28.514821   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:30.516141   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:32.925741   82514 kubeadm.go:310] [api-check] The API server is healthy after 5.002361148s
	I0717 01:52:32.944614   82514 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 01:52:32.961071   82514 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 01:52:32.991939   82514 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 01:52:32.992196   82514 kubeadm.go:310] [mark-control-plane] Marking the node bridge-453036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 01:52:33.004908   82514 kubeadm.go:310] [bootstrap-token] Using token: 1gcmwt.c1v5kdi9pj4vwamz
	I0717 01:52:33.006187   82514 out.go:204]   - Configuring RBAC rules ...
	I0717 01:52:33.006309   82514 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 01:52:33.027677   82514 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 01:52:33.059295   82514 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 01:52:33.067184   82514 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 01:52:33.071918   82514 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 01:52:33.075725   82514 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 01:52:33.336000   82514 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 01:52:29.924968   80566 pod_ready.go:102] pod "coredns-7db6d8ff4d-846wb" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:32.422325   80566 pod_ready.go:102] pod "coredns-7db6d8ff4d-846wb" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:33.792884   82514 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 01:52:34.338799   82514 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 01:52:34.340060   82514 kubeadm.go:310] 
	I0717 01:52:34.340151   82514 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 01:52:34.340164   82514 kubeadm.go:310] 
	I0717 01:52:34.340285   82514 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 01:52:34.340306   82514 kubeadm.go:310] 
	I0717 01:52:34.340360   82514 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 01:52:34.340447   82514 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 01:52:34.340529   82514 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 01:52:34.340547   82514 kubeadm.go:310] 
	I0717 01:52:34.340623   82514 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 01:52:34.340631   82514 kubeadm.go:310] 
	I0717 01:52:34.340680   82514 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 01:52:34.340687   82514 kubeadm.go:310] 
	I0717 01:52:34.340728   82514 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 01:52:34.340792   82514 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 01:52:34.340863   82514 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 01:52:34.340871   82514 kubeadm.go:310] 
	I0717 01:52:34.340966   82514 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 01:52:34.341066   82514 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 01:52:34.341077   82514 kubeadm.go:310] 
	I0717 01:52:34.341181   82514 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1gcmwt.c1v5kdi9pj4vwamz \
	I0717 01:52:34.341289   82514 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 \
	I0717 01:52:34.341313   82514 kubeadm.go:310] 	--control-plane 
	I0717 01:52:34.341322   82514 kubeadm.go:310] 
	I0717 01:52:34.341423   82514 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 01:52:34.341433   82514 kubeadm.go:310] 
	I0717 01:52:34.341504   82514 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1gcmwt.c1v5kdi9pj4vwamz \
	I0717 01:52:34.341589   82514 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b3605c9b3057b0271572b1da9a1b6fc60a70f57587e3c8c3005e4dfcbab6ce95 
	I0717 01:52:34.342299   82514 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:52:34.342392   82514 cni.go:84] Creating CNI manager for "bridge"
	I0717 01:52:34.343628   82514 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:52:33.017476   79788 pod_ready.go:102] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:35.515329   79788 pod_ready.go:92] pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:35.515355   79788 pod_ready.go:81] duration metric: took 39.506908935s for pod "coredns-7db6d8ff4d-99ngw" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:35.515371   79788 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-zzhrj" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:35.517423   79788 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-zzhrj" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-zzhrj" not found
	I0717 01:52:35.517447   79788 pod_ready.go:81] duration metric: took 2.066473ms for pod "coredns-7db6d8ff4d-zzhrj" in "kube-system" namespace to be "Ready" ...
	E0717 01:52:35.517458   79788 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-zzhrj" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-zzhrj" not found
	I0717 01:52:35.517467   79788 pod_ready.go:78] waiting up to 15m0s for pod "etcd-enable-default-cni-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:35.522926   79788 pod_ready.go:92] pod "etcd-enable-default-cni-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:35.522941   79788 pod_ready.go:81] duration metric: took 5.464548ms for pod "etcd-enable-default-cni-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:35.522949   79788 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:35.527673   79788 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:35.527691   79788 pod_ready.go:81] duration metric: took 4.735302ms for pod "kube-apiserver-enable-default-cni-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:35.527702   79788 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:35.532363   79788 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:35.532382   79788 pod_ready.go:81] duration metric: took 4.671846ms for pod "kube-controller-manager-enable-default-cni-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:35.532392   79788 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-t7v2s" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:35.712811   79788 pod_ready.go:92] pod "kube-proxy-t7v2s" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:35.712835   79788 pod_ready.go:81] duration metric: took 180.435916ms for pod "kube-proxy-t7v2s" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:35.712846   79788 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:36.113360   79788 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:36.113380   79788 pod_ready.go:81] duration metric: took 400.527214ms for pod "kube-scheduler-enable-default-cni-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:36.113387   79788 pod_ready.go:38] duration metric: took 40.125389914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:52:36.113401   79788 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:52:36.113451   79788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:52:36.131164   79788 api_server.go:72] duration metric: took 41.135823247s to wait for apiserver process to appear ...
	I0717 01:52:36.131198   79788 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:52:36.131229   79788 api_server.go:253] Checking apiserver healthz at https://192.168.50.111:8443/healthz ...
	I0717 01:52:36.137634   79788 api_server.go:279] https://192.168.50.111:8443/healthz returned 200:
	ok
	I0717 01:52:36.139529   79788 api_server.go:141] control plane version: v1.30.2
	I0717 01:52:36.139549   79788 api_server.go:131] duration metric: took 8.34542ms to wait for apiserver health ...
	I0717 01:52:36.139557   79788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:52:36.315114   79788 system_pods.go:59] 7 kube-system pods found
	I0717 01:52:36.315153   79788 system_pods.go:61] "coredns-7db6d8ff4d-99ngw" [ad3617cc-5678-45d1-8266-596c9f7447b7] Running
	I0717 01:52:36.315158   79788 system_pods.go:61] "etcd-enable-default-cni-453036" [da9dca0a-4da8-44c9-a4ff-d28dd6ffe088] Running
	I0717 01:52:36.315163   79788 system_pods.go:61] "kube-apiserver-enable-default-cni-453036" [d087338f-1ec4-4c5e-81c5-e68236da8edc] Running
	I0717 01:52:36.315167   79788 system_pods.go:61] "kube-controller-manager-enable-default-cni-453036" [e0c73281-bad4-4c90-99b8-2422cc68d687] Running
	I0717 01:52:36.315170   79788 system_pods.go:61] "kube-proxy-t7v2s" [83b3c91b-c4cf-46ee-a4cc-993e9795f92e] Running
	I0717 01:52:36.315173   79788 system_pods.go:61] "kube-scheduler-enable-default-cni-453036" [7edc7211-d3e7-4a9a-9e21-179e8e15597b] Running
	I0717 01:52:36.315177   79788 system_pods.go:61] "storage-provisioner" [984731ab-d3c3-4e91-ae9a-a6f93568ff52] Running
	I0717 01:52:36.315183   79788 system_pods.go:74] duration metric: took 175.620922ms to wait for pod list to return data ...
	I0717 01:52:36.315191   79788 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:52:36.512595   79788 default_sa.go:45] found service account: "default"
	I0717 01:52:36.512628   79788 default_sa.go:55] duration metric: took 197.429078ms for default service account to be created ...
	I0717 01:52:36.512639   79788 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:52:36.716627   79788 system_pods.go:86] 7 kube-system pods found
	I0717 01:52:36.716660   79788 system_pods.go:89] "coredns-7db6d8ff4d-99ngw" [ad3617cc-5678-45d1-8266-596c9f7447b7] Running
	I0717 01:52:36.716669   79788 system_pods.go:89] "etcd-enable-default-cni-453036" [da9dca0a-4da8-44c9-a4ff-d28dd6ffe088] Running
	I0717 01:52:36.716680   79788 system_pods.go:89] "kube-apiserver-enable-default-cni-453036" [d087338f-1ec4-4c5e-81c5-e68236da8edc] Running
	I0717 01:52:36.716687   79788 system_pods.go:89] "kube-controller-manager-enable-default-cni-453036" [e0c73281-bad4-4c90-99b8-2422cc68d687] Running
	I0717 01:52:36.716700   79788 system_pods.go:89] "kube-proxy-t7v2s" [83b3c91b-c4cf-46ee-a4cc-993e9795f92e] Running
	I0717 01:52:36.716707   79788 system_pods.go:89] "kube-scheduler-enable-default-cni-453036" [7edc7211-d3e7-4a9a-9e21-179e8e15597b] Running
	I0717 01:52:36.716713   79788 system_pods.go:89] "storage-provisioner" [984731ab-d3c3-4e91-ae9a-a6f93568ff52] Running
	I0717 01:52:36.716722   79788 system_pods.go:126] duration metric: took 204.075471ms to wait for k8s-apps to be running ...
	I0717 01:52:36.716730   79788 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:52:36.716788   79788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:52:36.731840   79788 system_svc.go:56] duration metric: took 15.100847ms WaitForService to wait for kubelet
	I0717 01:52:36.731867   79788 kubeadm.go:582] duration metric: took 41.736530733s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:52:36.731886   79788 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:52:36.912283   79788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:52:36.912311   79788 node_conditions.go:123] node cpu capacity is 2
	I0717 01:52:36.912326   79788 node_conditions.go:105] duration metric: took 180.435025ms to run NodePressure ...
	I0717 01:52:36.912338   79788 start.go:241] waiting for startup goroutines ...
	I0717 01:52:36.912347   79788 start.go:246] waiting for cluster config update ...
	I0717 01:52:36.912358   79788 start.go:255] writing updated cluster config ...
	I0717 01:52:36.912622   79788 ssh_runner.go:195] Run: rm -f paused
	I0717 01:52:36.961585   79788 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:52:36.963533   79788 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-453036" cluster and "default" namespace by default
	I0717 01:52:34.345061   82514 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:52:34.356370   82514 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:52:34.375369   82514 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:52:34.375446   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:34.375494   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-453036 minikube.k8s.io/updated_at=2024_07_17T01_52_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=bridge-453036 minikube.k8s.io/primary=true
	I0717 01:52:34.408510   82514 ops.go:34] apiserver oom_adj: -16
	I0717 01:52:34.512456   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:35.013235   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:35.512664   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:36.013291   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:36.512759   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:37.012728   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:37.513309   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:38.012514   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:34.423080   80566 pod_ready.go:102] pod "coredns-7db6d8ff4d-846wb" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:36.922837   80566 pod_ready.go:102] pod "coredns-7db6d8ff4d-846wb" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:38.933678   80566 pod_ready.go:102] pod "coredns-7db6d8ff4d-846wb" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:39.424114   80566 pod_ready.go:92] pod "coredns-7db6d8ff4d-846wb" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:39.424143   80566 pod_ready.go:81] duration metric: took 13.507980679s for pod "coredns-7db6d8ff4d-846wb" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:39.424157   80566 pod_ready.go:78] waiting up to 15m0s for pod "etcd-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:39.429495   80566 pod_ready.go:92] pod "etcd-flannel-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:39.429518   80566 pod_ready.go:81] duration metric: took 5.350681ms for pod "etcd-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:39.429534   80566 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:39.434544   80566 pod_ready.go:92] pod "kube-apiserver-flannel-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:39.434564   80566 pod_ready.go:81] duration metric: took 5.021216ms for pod "kube-apiserver-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:39.434575   80566 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:39.438766   80566 pod_ready.go:92] pod "kube-controller-manager-flannel-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:39.438783   80566 pod_ready.go:81] duration metric: took 4.201042ms for pod "kube-controller-manager-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:39.438791   80566 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-b5xhd" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:39.443413   80566 pod_ready.go:92] pod "kube-proxy-b5xhd" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:39.443433   80566 pod_ready.go:81] duration metric: took 4.637185ms for pod "kube-proxy-b5xhd" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:39.443442   80566 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:39.821563   80566 pod_ready.go:92] pod "kube-scheduler-flannel-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:39.821591   80566 pod_ready.go:81] duration metric: took 378.141392ms for pod "kube-scheduler-flannel-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:39.821606   80566 pod_ready.go:38] duration metric: took 13.913630222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:52:39.821622   80566 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:52:39.821682   80566 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:52:39.839787   80566 api_server.go:72] duration metric: took 22.499493526s to wait for apiserver process to appear ...
	I0717 01:52:39.839815   80566 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:52:39.839836   80566 api_server.go:253] Checking apiserver healthz at https://192.168.61.173:8443/healthz ...
	I0717 01:52:39.844390   80566 api_server.go:279] https://192.168.61.173:8443/healthz returned 200:
	ok
	I0717 01:52:39.845647   80566 api_server.go:141] control plane version: v1.30.2
	I0717 01:52:39.845674   80566 api_server.go:131] duration metric: took 5.852115ms to wait for apiserver health ...
	I0717 01:52:39.845683   80566 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:52:40.024726   80566 system_pods.go:59] 7 kube-system pods found
	I0717 01:52:40.024758   80566 system_pods.go:61] "coredns-7db6d8ff4d-846wb" [21affa6c-03b7-4a09-acd1-99b843397c36] Running
	I0717 01:52:40.024763   80566 system_pods.go:61] "etcd-flannel-453036" [7a5b42d7-361a-4b1e-8eba-2f91b13dcfd1] Running
	I0717 01:52:40.024767   80566 system_pods.go:61] "kube-apiserver-flannel-453036" [8cf44724-4f55-4629-8392-591ae46ae409] Running
	I0717 01:52:40.024770   80566 system_pods.go:61] "kube-controller-manager-flannel-453036" [d51b9891-1313-4ca4-99e8-5bd70b5a7a4c] Running
	I0717 01:52:40.024773   80566 system_pods.go:61] "kube-proxy-b5xhd" [d8a287b0-84ba-448d-8703-eb3b840375cc] Running
	I0717 01:52:40.024776   80566 system_pods.go:61] "kube-scheduler-flannel-453036" [98ef4284-bd51-47b5-a82f-b1e84983332e] Running
	I0717 01:52:40.024779   80566 system_pods.go:61] "storage-provisioner" [7f32bc9c-cb77-42fa-af28-7bb1564d71af] Running
	I0717 01:52:40.024783   80566 system_pods.go:74] duration metric: took 179.092737ms to wait for pod list to return data ...
	I0717 01:52:40.024789   80566 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:52:40.221209   80566 default_sa.go:45] found service account: "default"
	I0717 01:52:40.221240   80566 default_sa.go:55] duration metric: took 196.444442ms for default service account to be created ...
	I0717 01:52:40.221252   80566 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:52:40.422808   80566 system_pods.go:86] 7 kube-system pods found
	I0717 01:52:40.422841   80566 system_pods.go:89] "coredns-7db6d8ff4d-846wb" [21affa6c-03b7-4a09-acd1-99b843397c36] Running
	I0717 01:52:40.422847   80566 system_pods.go:89] "etcd-flannel-453036" [7a5b42d7-361a-4b1e-8eba-2f91b13dcfd1] Running
	I0717 01:52:40.422852   80566 system_pods.go:89] "kube-apiserver-flannel-453036" [8cf44724-4f55-4629-8392-591ae46ae409] Running
	I0717 01:52:40.422858   80566 system_pods.go:89] "kube-controller-manager-flannel-453036" [d51b9891-1313-4ca4-99e8-5bd70b5a7a4c] Running
	I0717 01:52:40.422865   80566 system_pods.go:89] "kube-proxy-b5xhd" [d8a287b0-84ba-448d-8703-eb3b840375cc] Running
	I0717 01:52:40.422870   80566 system_pods.go:89] "kube-scheduler-flannel-453036" [98ef4284-bd51-47b5-a82f-b1e84983332e] Running
	I0717 01:52:40.422876   80566 system_pods.go:89] "storage-provisioner" [7f32bc9c-cb77-42fa-af28-7bb1564d71af] Running
	I0717 01:52:40.422885   80566 system_pods.go:126] duration metric: took 201.626702ms to wait for k8s-apps to be running ...
	I0717 01:52:40.422898   80566 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:52:40.422947   80566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:52:40.438989   80566 system_svc.go:56] duration metric: took 16.082056ms WaitForService to wait for kubelet
	I0717 01:52:40.439024   80566 kubeadm.go:582] duration metric: took 23.098734958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:52:40.439050   80566 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:52:40.622506   80566 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:52:40.622543   80566 node_conditions.go:123] node cpu capacity is 2
	I0717 01:52:40.622557   80566 node_conditions.go:105] duration metric: took 183.499683ms to run NodePressure ...
	I0717 01:52:40.622572   80566 start.go:241] waiting for startup goroutines ...
	I0717 01:52:40.622581   80566 start.go:246] waiting for cluster config update ...
	I0717 01:52:40.622594   80566 start.go:255] writing updated cluster config ...
	I0717 01:52:40.622916   80566 ssh_runner.go:195] Run: rm -f paused
	I0717 01:52:40.696014   80566 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:52:40.697909   80566 out.go:177] * Done! kubectl is now configured to use "flannel-453036" cluster and "default" namespace by default
	I0717 01:52:38.513427   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:39.013149   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:39.513093   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:40.013304   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:40.513181   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:41.012693   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:41.512472   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:42.013060   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:42.513310   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:43.012706   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:43.512671   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:44.013078   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:44.513025   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:45.012610   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:45.513004   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:46.013454   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:46.512464   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:47.012482   82514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 01:52:47.137468   82514 kubeadm.go:1113] duration metric: took 12.762078842s to wait for elevateKubeSystemPrivileges
	I0717 01:52:47.137500   82514 kubeadm.go:394] duration metric: took 23.843540407s to StartCluster
	I0717 01:52:47.137521   82514 settings.go:142] acquiring lock: {Name:mk79e383b67f93b97e5e2314cff4a1a88322d4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:52:47.137582   82514 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:52:47.139029   82514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/kubeconfig: {Name:mk2c801a2d4c5e427579d1f439221e33e8a6f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:52:47.139216   82514 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:52:47.139298   82514 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 01:52:47.139353   82514 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:52:47.139422   82514 addons.go:69] Setting storage-provisioner=true in profile "bridge-453036"
	I0717 01:52:47.139448   82514 addons.go:234] Setting addon storage-provisioner=true in "bridge-453036"
	I0717 01:52:47.139476   82514 host.go:66] Checking if "bridge-453036" exists ...
	I0717 01:52:47.139510   82514 config.go:182] Loaded profile config "bridge-453036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:52:47.139569   82514 addons.go:69] Setting default-storageclass=true in profile "bridge-453036"
	I0717 01:52:47.139613   82514 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-453036"
	I0717 01:52:47.139899   82514 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:52:47.139939   82514 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:52:47.140066   82514 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:52:47.140097   82514 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:52:47.141003   82514 out.go:177] * Verifying Kubernetes components...
	I0717 01:52:47.142489   82514 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:52:47.158085   82514 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38837
	I0717 01:52:47.158570   82514 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:52:47.159124   82514 main.go:141] libmachine: Using API Version  1
	I0717 01:52:47.159153   82514 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:52:47.159540   82514 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:52:47.159633   82514 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34045
	I0717 01:52:47.160030   82514 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:52:47.160036   82514 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:52:47.160094   82514 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:52:47.160456   82514 main.go:141] libmachine: Using API Version  1
	I0717 01:52:47.160479   82514 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:52:47.160819   82514 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:52:47.160994   82514 main.go:141] libmachine: (bridge-453036) Calling .GetState
	I0717 01:52:47.164293   82514 addons.go:234] Setting addon default-storageclass=true in "bridge-453036"
	I0717 01:52:47.164323   82514 host.go:66] Checking if "bridge-453036" exists ...
	I0717 01:52:47.164549   82514 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:52:47.164586   82514 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:52:47.176682   82514 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40233
	I0717 01:52:47.177204   82514 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:52:47.177775   82514 main.go:141] libmachine: Using API Version  1
	I0717 01:52:47.177798   82514 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:52:47.178193   82514 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:52:47.178387   82514 main.go:141] libmachine: (bridge-453036) Calling .GetState
	I0717 01:52:47.180068   82514 main.go:141] libmachine: (bridge-453036) Calling .DriverName
	I0717 01:52:47.181386   82514 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:52:47.181424   82514 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44539
	I0717 01:52:47.182010   82514 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:52:47.182484   82514 main.go:141] libmachine: Using API Version  1
	I0717 01:52:47.182508   82514 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:52:47.182680   82514 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:52:47.182693   82514 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:52:47.182704   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHHostname
	I0717 01:52:47.183164   82514 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:52:47.183728   82514 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19265-12897/.minikube/bin/docker-machine-driver-kvm2
	I0717 01:52:47.183784   82514 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:52:47.185521   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:47.185878   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:47.185898   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:47.186057   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHPort
	I0717 01:52:47.186175   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:47.186275   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHUsername
	I0717 01:52:47.186359   82514 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/id_rsa Username:docker}
	I0717 01:52:47.202478   82514 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39897
	I0717 01:52:47.202925   82514 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:52:47.203388   82514 main.go:141] libmachine: Using API Version  1
	I0717 01:52:47.203407   82514 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:52:47.203805   82514 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:52:47.203965   82514 main.go:141] libmachine: (bridge-453036) Calling .GetState
	I0717 01:52:47.205546   82514 main.go:141] libmachine: (bridge-453036) Calling .DriverName
	I0717 01:52:47.205745   82514 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:52:47.205759   82514 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:52:47.205786   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHHostname
	I0717 01:52:47.208436   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:47.208821   82514 main.go:141] libmachine: (bridge-453036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:dd:f6", ip: ""} in network mk-bridge-453036: {Iface:virbr4 ExpiryTime:2024-07-17 02:52:03 +0000 UTC Type:0 Mac:52:54:00:2d:dd:f6 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-453036 Clientid:01:52:54:00:2d:dd:f6}
	I0717 01:52:47.208841   82514 main.go:141] libmachine: (bridge-453036) DBG | domain bridge-453036 has defined IP address 192.168.72.138 and MAC address 52:54:00:2d:dd:f6 in network mk-bridge-453036
	I0717 01:52:47.208982   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHPort
	I0717 01:52:47.209157   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHKeyPath
	I0717 01:52:47.209280   82514 main.go:141] libmachine: (bridge-453036) Calling .GetSSHUsername
	I0717 01:52:47.209417   82514 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/bridge-453036/id_rsa Username:docker}
	I0717 01:52:47.450023   82514 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:52:47.450068   82514 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 01:52:47.487827   82514 node_ready.go:35] waiting up to 15m0s for node "bridge-453036" to be "Ready" ...
	I0717 01:52:47.497297   82514 node_ready.go:49] node "bridge-453036" has status "Ready":"True"
	I0717 01:52:47.497322   82514 node_ready.go:38] duration metric: took 9.456532ms for node "bridge-453036" to be "Ready" ...
	I0717 01:52:47.497333   82514 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:52:47.525704   82514 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-mvjlq" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:47.650143   82514 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:52:47.691081   82514 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:52:48.136415   82514 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0717 01:52:48.641915   82514 kapi.go:248] "coredns" deployment in "kube-system" namespace and "bridge-453036" context rescaled to 1 replicas
	I0717 01:52:48.721937   82514 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.030811882s)
	I0717 01:52:48.722006   82514 main.go:141] libmachine: Making call to close driver server
	I0717 01:52:48.722020   82514 main.go:141] libmachine: (bridge-453036) Calling .Close
	I0717 01:52:48.722258   82514 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.072080199s)
	I0717 01:52:48.722297   82514 main.go:141] libmachine: Making call to close driver server
	I0717 01:52:48.722308   82514 main.go:141] libmachine: (bridge-453036) Calling .Close
	I0717 01:52:48.722384   82514 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:52:48.722390   82514 main.go:141] libmachine: (bridge-453036) DBG | Closing plugin on server side
	I0717 01:52:48.722393   82514 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:52:48.722407   82514 main.go:141] libmachine: Making call to close driver server
	I0717 01:52:48.722414   82514 main.go:141] libmachine: (bridge-453036) Calling .Close
	I0717 01:52:48.722708   82514 main.go:141] libmachine: (bridge-453036) DBG | Closing plugin on server side
	I0717 01:52:48.722744   82514 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:52:48.722752   82514 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:52:48.722760   82514 main.go:141] libmachine: Making call to close driver server
	I0717 01:52:48.722769   82514 main.go:141] libmachine: (bridge-453036) Calling .Close
	I0717 01:52:48.724814   82514 main.go:141] libmachine: (bridge-453036) DBG | Closing plugin on server side
	I0717 01:52:48.724850   82514 main.go:141] libmachine: (bridge-453036) DBG | Closing plugin on server side
	I0717 01:52:48.724854   82514 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:52:48.724858   82514 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:52:48.724866   82514 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:52:48.724869   82514 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:52:48.740415   82514 main.go:141] libmachine: Making call to close driver server
	I0717 01:52:48.740437   82514 main.go:141] libmachine: (bridge-453036) Calling .Close
	I0717 01:52:48.740737   82514 main.go:141] libmachine: (bridge-453036) DBG | Closing plugin on server side
	I0717 01:52:48.742382   82514 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:52:48.742402   82514 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:52:48.744064   82514 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 01:52:48.745203   82514 addons.go:510] duration metric: took 1.605837294s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0717 01:52:49.533095   82514 pod_ready.go:102] pod "coredns-7db6d8ff4d-mvjlq" in "kube-system" namespace has status "Ready":"False"
	I0717 01:52:50.032252   82514 pod_ready.go:92] pod "coredns-7db6d8ff4d-mvjlq" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:50.032279   82514 pod_ready.go:81] duration metric: took 2.506542751s for pod "coredns-7db6d8ff4d-mvjlq" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:50.032292   82514 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-tfkrg" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:50.036777   82514 pod_ready.go:92] pod "coredns-7db6d8ff4d-tfkrg" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:50.036800   82514 pod_ready.go:81] duration metric: took 4.500929ms for pod "coredns-7db6d8ff4d-tfkrg" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:50.036811   82514 pod_ready.go:78] waiting up to 15m0s for pod "etcd-bridge-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:50.040920   82514 pod_ready.go:92] pod "etcd-bridge-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:50.040940   82514 pod_ready.go:81] duration metric: took 4.121711ms for pod "etcd-bridge-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:50.040952   82514 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-bridge-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:50.045991   82514 pod_ready.go:92] pod "kube-apiserver-bridge-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:50.046007   82514 pod_ready.go:81] duration metric: took 5.048058ms for pod "kube-apiserver-bridge-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:50.046017   82514 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-bridge-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:50.051078   82514 pod_ready.go:92] pod "kube-controller-manager-bridge-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:50.051097   82514 pod_ready.go:81] duration metric: took 5.072398ms for pod "kube-controller-manager-bridge-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:50.051105   82514 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-fvr9j" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:50.430726   82514 pod_ready.go:92] pod "kube-proxy-fvr9j" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:50.430755   82514 pod_ready.go:81] duration metric: took 379.642044ms for pod "kube-proxy-fvr9j" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:50.430770   82514 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-bridge-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:50.829168   82514 pod_ready.go:92] pod "kube-scheduler-bridge-453036" in "kube-system" namespace has status "Ready":"True"
	I0717 01:52:50.829194   82514 pod_ready.go:81] duration metric: took 398.415453ms for pod "kube-scheduler-bridge-453036" in "kube-system" namespace to be "Ready" ...
	I0717 01:52:50.829205   82514 pod_ready.go:38] duration metric: took 3.331859105s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:52:50.829230   82514 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:52:50.829296   82514 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:52:50.847931   82514 api_server.go:72] duration metric: took 3.708688325s to wait for apiserver process to appear ...
	I0717 01:52:50.847958   82514 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:52:50.847981   82514 api_server.go:253] Checking apiserver healthz at https://192.168.72.138:8443/healthz ...
	I0717 01:52:50.852318   82514 api_server.go:279] https://192.168.72.138:8443/healthz returned 200:
	ok
	I0717 01:52:50.853640   82514 api_server.go:141] control plane version: v1.30.2
	I0717 01:52:50.853700   82514 api_server.go:131] duration metric: took 5.733333ms to wait for apiserver health ...
	I0717 01:52:50.853715   82514 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:52:51.033213   82514 system_pods.go:59] 8 kube-system pods found
	I0717 01:52:51.033258   82514 system_pods.go:61] "coredns-7db6d8ff4d-mvjlq" [8c36c459-2d10-4840-8593-01bb1df03b5a] Running
	I0717 01:52:51.033267   82514 system_pods.go:61] "coredns-7db6d8ff4d-tfkrg" [99e60a7b-7fd9-4aaa-9b65-6f0d17af980e] Running
	I0717 01:52:51.033272   82514 system_pods.go:61] "etcd-bridge-453036" [bc1d913e-0679-4ef0-a6ed-2b8caa7207a5] Running
	I0717 01:52:51.033277   82514 system_pods.go:61] "kube-apiserver-bridge-453036" [0f978289-2298-4b91-9202-38ad1345cb8f] Running
	I0717 01:52:51.033282   82514 system_pods.go:61] "kube-controller-manager-bridge-453036" [7d4b2704-efac-4434-b084-eae28e4b1edf] Running
	I0717 01:52:51.033287   82514 system_pods.go:61] "kube-proxy-fvr9j" [afb244e1-8921-42bc-902e-0b31bb6e253d] Running
	I0717 01:52:51.033293   82514 system_pods.go:61] "kube-scheduler-bridge-453036" [e769d3e6-0749-496e-b8c1-3bcb72e12b09] Running
	I0717 01:52:51.033300   82514 system_pods.go:61] "storage-provisioner" [e90991d0-5f84-4133-b221-650189e5a988] Running
	I0717 01:52:51.033307   82514 system_pods.go:74] duration metric: took 179.58531ms to wait for pod list to return data ...
	I0717 01:52:51.033319   82514 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:52:51.229823   82514 default_sa.go:45] found service account: "default"
	I0717 01:52:51.229848   82514 default_sa.go:55] duration metric: took 196.523327ms for default service account to be created ...
	I0717 01:52:51.229857   82514 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:52:51.433301   82514 system_pods.go:86] 8 kube-system pods found
	I0717 01:52:51.433335   82514 system_pods.go:89] "coredns-7db6d8ff4d-mvjlq" [8c36c459-2d10-4840-8593-01bb1df03b5a] Running
	I0717 01:52:51.433344   82514 system_pods.go:89] "coredns-7db6d8ff4d-tfkrg" [99e60a7b-7fd9-4aaa-9b65-6f0d17af980e] Running
	I0717 01:52:51.433350   82514 system_pods.go:89] "etcd-bridge-453036" [bc1d913e-0679-4ef0-a6ed-2b8caa7207a5] Running
	I0717 01:52:51.433357   82514 system_pods.go:89] "kube-apiserver-bridge-453036" [0f978289-2298-4b91-9202-38ad1345cb8f] Running
	I0717 01:52:51.433363   82514 system_pods.go:89] "kube-controller-manager-bridge-453036" [7d4b2704-efac-4434-b084-eae28e4b1edf] Running
	I0717 01:52:51.433369   82514 system_pods.go:89] "kube-proxy-fvr9j" [afb244e1-8921-42bc-902e-0b31bb6e253d] Running
	I0717 01:52:51.433374   82514 system_pods.go:89] "kube-scheduler-bridge-453036" [e769d3e6-0749-496e-b8c1-3bcb72e12b09] Running
	I0717 01:52:51.433381   82514 system_pods.go:89] "storage-provisioner" [e90991d0-5f84-4133-b221-650189e5a988] Running
	I0717 01:52:51.433390   82514 system_pods.go:126] duration metric: took 203.525893ms to wait for k8s-apps to be running ...
	I0717 01:52:51.433412   82514 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:52:51.433465   82514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:52:51.450780   82514 system_svc.go:56] duration metric: took 17.366995ms WaitForService to wait for kubelet
	I0717 01:52:51.450807   82514 kubeadm.go:582] duration metric: took 4.311570292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:52:51.450825   82514 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:52:51.630595   82514 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:52:51.630625   82514 node_conditions.go:123] node cpu capacity is 2
	I0717 01:52:51.630637   82514 node_conditions.go:105] duration metric: took 179.807454ms to run NodePressure ...
	I0717 01:52:51.630652   82514 start.go:241] waiting for startup goroutines ...
	I0717 01:52:51.630659   82514 start.go:246] waiting for cluster config update ...
	I0717 01:52:51.630671   82514 start.go:255] writing updated cluster config ...
	I0717 01:52:51.630972   82514 ssh_runner.go:195] Run: rm -f paused
	I0717 01:52:51.678253   82514 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:52:51.680333   82514 out.go:177] * Done! kubectl is now configured to use "bridge-453036" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.067662266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181523067643674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ce24f55-30f4-4f5f-a15e-24587ee46aa1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.068067698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37d2fbd4-2dd6-4891-8784-b83a64960d2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.068125893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37d2fbd4-2dd6-4891-8784-b83a64960d2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.068304680Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721180319723301962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a81affed33178408da2f628642aa7edf3db0831f9a2ca3ccdf06466c131b6b0,PodSandboxId:0753a23b624dfebe5e28d2d417d277c4d28d267e72fb0ee392b128d4d6ae3903,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721180297512914016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c1ff7c10-e7aa-4724-afff-9ec2e8657e90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902,PodSandboxId:a080b45de4fc043a6f72102bf260287dc04b127b5dca009791f732a8921f3549,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180296604158470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-rzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb91980f-dca7-4dd0-902e-7d1ffac4e1b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721180288948147800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571,PodSandboxId:eaac9b90282922f6488de55f788e2bfdbe4c74fccc64678df73dfedf1d3bfd2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721180288904184621,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xjgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79ab1bff-5791-464d-98a0-041c53c472
34,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc,PodSandboxId:3c2fcb01cef6efaed71ddd2ad0846150979ab49b21a4e382fe48ad08b0cd370f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721180284215176238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c65e59014846c76fb9e094d3e44300,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf,PodSandboxId:5d740ec6d82b24619039e83ba0a8a4aa79061c8f59859a7b6fefe4ac00aea3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721180284216809145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdac3dcce3429ded2529e5ce29ecbb9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2,PodSandboxId:a2ca2343586d5d0bf54c3f1e2a28f5fa59c0e092e423ba272692822c1ec140bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721180284190332833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838a7a6ab42ee7a7484c41d69e5ba22c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4d
a08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e,PodSandboxId:faf56bfdc6714484aed8a106865cee9dc8bc051927831e4faf2dad898f854fdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721180284111408114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce7f8e6ea3c381a1e21f86060e22a334,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37d2fbd4-2dd6-4891-8784-b83a64960d2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.102964503Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=438bca25-56a8-47f6-9a1f-5e20e3370c90 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.103035301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=438bca25-56a8-47f6-9a1f-5e20e3370c90 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.104196959Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ddfb1f8e-7208-4d96-94b5-1cab163e5398 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.104597168Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181523104516210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddfb1f8e-7208-4d96-94b5-1cab163e5398 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.105062793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72a7d745-95c2-493a-81ed-dfca8c4a7890 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.105129073Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72a7d745-95c2-493a-81ed-dfca8c4a7890 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.105325287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721180319723301962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a81affed33178408da2f628642aa7edf3db0831f9a2ca3ccdf06466c131b6b0,PodSandboxId:0753a23b624dfebe5e28d2d417d277c4d28d267e72fb0ee392b128d4d6ae3903,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721180297512914016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c1ff7c10-e7aa-4724-afff-9ec2e8657e90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902,PodSandboxId:a080b45de4fc043a6f72102bf260287dc04b127b5dca009791f732a8921f3549,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180296604158470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-rzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb91980f-dca7-4dd0-902e-7d1ffac4e1b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721180288948147800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571,PodSandboxId:eaac9b90282922f6488de55f788e2bfdbe4c74fccc64678df73dfedf1d3bfd2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721180288904184621,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xjgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79ab1bff-5791-464d-98a0-041c53c472
34,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc,PodSandboxId:3c2fcb01cef6efaed71ddd2ad0846150979ab49b21a4e382fe48ad08b0cd370f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721180284215176238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c65e59014846c76fb9e094d3e44300,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf,PodSandboxId:5d740ec6d82b24619039e83ba0a8a4aa79061c8f59859a7b6fefe4ac00aea3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721180284216809145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdac3dcce3429ded2529e5ce29ecbb9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2,PodSandboxId:a2ca2343586d5d0bf54c3f1e2a28f5fa59c0e092e423ba272692822c1ec140bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721180284190332833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838a7a6ab42ee7a7484c41d69e5ba22c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4d
a08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e,PodSandboxId:faf56bfdc6714484aed8a106865cee9dc8bc051927831e4faf2dad898f854fdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721180284111408114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce7f8e6ea3c381a1e21f86060e22a334,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72a7d745-95c2-493a-81ed-dfca8c4a7890 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.140714295Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d38f3d14-0527-4b8a-b9e2-58f406d5ea4a name=/runtime.v1.RuntimeService/Version
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.140802758Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d38f3d14-0527-4b8a-b9e2-58f406d5ea4a name=/runtime.v1.RuntimeService/Version
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.141796054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be39da42-026d-40ee-839b-f822086574a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.142155250Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181523142134311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be39da42-026d-40ee-839b-f822086574a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.142637683Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad67ee9f-233b-4a15-a997-7774062c6f2c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.142706623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad67ee9f-233b-4a15-a997-7774062c6f2c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.142912992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721180319723301962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a81affed33178408da2f628642aa7edf3db0831f9a2ca3ccdf06466c131b6b0,PodSandboxId:0753a23b624dfebe5e28d2d417d277c4d28d267e72fb0ee392b128d4d6ae3903,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721180297512914016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c1ff7c10-e7aa-4724-afff-9ec2e8657e90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902,PodSandboxId:a080b45de4fc043a6f72102bf260287dc04b127b5dca009791f732a8921f3549,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180296604158470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-rzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb91980f-dca7-4dd0-902e-7d1ffac4e1b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721180288948147800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571,PodSandboxId:eaac9b90282922f6488de55f788e2bfdbe4c74fccc64678df73dfedf1d3bfd2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721180288904184621,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xjgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79ab1bff-5791-464d-98a0-041c53c472
34,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc,PodSandboxId:3c2fcb01cef6efaed71ddd2ad0846150979ab49b21a4e382fe48ad08b0cd370f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721180284215176238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c65e59014846c76fb9e094d3e44300,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf,PodSandboxId:5d740ec6d82b24619039e83ba0a8a4aa79061c8f59859a7b6fefe4ac00aea3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721180284216809145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdac3dcce3429ded2529e5ce29ecbb9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2,PodSandboxId:a2ca2343586d5d0bf54c3f1e2a28f5fa59c0e092e423ba272692822c1ec140bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721180284190332833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838a7a6ab42ee7a7484c41d69e5ba22c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4d
a08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e,PodSandboxId:faf56bfdc6714484aed8a106865cee9dc8bc051927831e4faf2dad898f854fdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721180284111408114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce7f8e6ea3c381a1e21f86060e22a334,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad67ee9f-233b-4a15-a997-7774062c6f2c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.175138839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0bd4f304-f215-4c76-8712-896e83ec4571 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.175228736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0bd4f304-f215-4c76-8712-896e83ec4571 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.176284037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7dcc6bba-eff3-4f64-a87d-f697928df4d4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.176764848Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181523176739848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7dcc6bba-eff3-4f64-a87d-f697928df4d4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.177229013Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7412a844-6b09-4ecf-8914-66d2619d2a93 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.177296845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7412a844-6b09-4ecf-8914-66d2619d2a93 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:58:43 no-preload-818382 crio[729]: time="2024-07-17 01:58:43.177487759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721180319723301962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a81affed33178408da2f628642aa7edf3db0831f9a2ca3ccdf06466c131b6b0,PodSandboxId:0753a23b624dfebe5e28d2d417d277c4d28d267e72fb0ee392b128d4d6ae3903,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721180297512914016,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c1ff7c10-e7aa-4724-afff-9ec2e8657e90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902,PodSandboxId:a080b45de4fc043a6f72102bf260287dc04b127b5dca009791f732a8921f3549,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180296604158470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-rzhfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb91980f-dca7-4dd0-902e-7d1ffac4e1b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a,PodSandboxId:9dfeec5263456d78b2dd3e3f3bd7c8e345a9b42ec97a074d98d04c756c15b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721180288948147800,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
5a0695e-6c38-463e-8f96-60c0e60c7132,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571,PodSandboxId:eaac9b90282922f6488de55f788e2bfdbe4c74fccc64678df73dfedf1d3bfd2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721180288904184621,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xjgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79ab1bff-5791-464d-98a0-041c53c472
34,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc,PodSandboxId:3c2fcb01cef6efaed71ddd2ad0846150979ab49b21a4e382fe48ad08b0cd370f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721180284215176238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c65e59014846c76fb9e094d3e44300,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf,PodSandboxId:5d740ec6d82b24619039e83ba0a8a4aa79061c8f59859a7b6fefe4ac00aea3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721180284216809145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdac3dcce3429ded2529e5ce29ecbb9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2,PodSandboxId:a2ca2343586d5d0bf54c3f1e2a28f5fa59c0e092e423ba272692822c1ec140bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721180284190332833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838a7a6ab42ee7a7484c41d69e5ba22c,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4d
a08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e,PodSandboxId:faf56bfdc6714484aed8a106865cee9dc8bc051927831e4faf2dad898f854fdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721180284111408114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-818382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce7f8e6ea3c381a1e21f86060e22a334,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7412a844-6b09-4ecf-8914-66d2619d2a93 name=/runtime.v1.RuntimeService/ListContainers
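	The repeated Version, ImageFsInfo and ListContainers entries in the CRI-O debug log above are the kubelet's periodic polls over the CRI gRPC API; `crictl ps -a` surfaces the same data as the container-status table further down. Purely as an illustration of the RPC shape (a sketch, not part of the minikube test harness), a minimal Go program issuing the same ListContainers call against the socket named in the node's cri-socket annotation could look like this:

```go
// Sketch only: list containers over the CRI gRPC API, the same call that
// produces the ListContainersResponse entries in the crio debug log above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Socket path taken from the node annotation reported below:
	// kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty request mirrors the logged ListContainersRequest with no
	// filters set, so the full container list is returned.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}
```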
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	da9966ff36be8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   9dfeec5263456       storage-provisioner
	0a81affed3317       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   0753a23b624df       busybox
	e8dda478edb70       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   a080b45de4fc0       coredns-5cfdc65f69-rzhfk
	b36943f541e1b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   9dfeec5263456       storage-provisioner
	98b3c4a1f8778       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      20 minutes ago      Running             kube-proxy                1                   eaac9b9028292       kube-proxy-7xjgl
	0e68107fbc903       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      20 minutes ago      Running             etcd                      1                   5d740ec6d82b2       etcd-no-preload-818382
	b7e8dfc9eddb7       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      20 minutes ago      Running             kube-scheduler            1                   3c2fcb01cef6e       kube-scheduler-no-preload-818382
	8b3944e69af1a       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      20 minutes ago      Running             kube-apiserver            1                   a2ca2343586d5       kube-apiserver-no-preload-818382
	7a78373ef3f84       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      20 minutes ago      Running             kube-controller-manager   1                   faf56bfdc6714       kube-controller-manager-no-preload-818382
	
	
	==> coredns [e8dda478edb7092e3f600feadbafa3f87a4868c659dd981155c1b533e9ff0902] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47653 - 208 "HINFO IN 3214131708330472645.7751523909791762612. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.088966684s
	
	
	==> describe nodes <==
	Name:               no-preload-818382
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-818382
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=no-preload-818382
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_29_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:29:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-818382
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:58:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:53:55 +0000   Wed, 17 Jul 2024 01:29:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:53:55 +0000   Wed, 17 Jul 2024 01:29:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:53:55 +0000   Wed, 17 Jul 2024 01:29:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:53:55 +0000   Wed, 17 Jul 2024 01:38:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    no-preload-818382
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fdd83e880e04146b6b0130198304011
	  System UUID:                1fdd83e8-80e0-4146-b6b0-130198304011
	  Boot ID:                    14bdd5e4-b055-48d3-aff1-025d69cecc8a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5cfdc65f69-rzhfk                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-818382                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-818382             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-818382    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-7xjgl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-818382             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-78fcd8795b-vgkwg              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-818382 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-818382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-818382 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-818382 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-818382 event: Registered Node no-preload-818382 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node no-preload-818382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node no-preload-818382 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node no-preload-818382 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-818382 event: Registered Node no-preload-818382 in Controller
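	For reference (arithmetic added here, not part of the captured output): the percentages in the Allocated resources table above are taken against the node's allocatable capacity of 2 CPUs and 2164184Ki of memory, so 850m / 2000m ≈ 42% CPU requests, 370Mi (378880Ki) / 2164184Ki ≈ 17% memory requests, and 170Mi (174080Ki) / 2164184Ki ≈ 8% memory limits.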
	
	
	==> dmesg <==
	[Jul17 01:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050139] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040267] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.568491] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.394781] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.592043] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.617968] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.055873] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059040] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.189107] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.117532] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.270674] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[Jul17 01:38] systemd-fstab-generator[1183]: Ignoring "noauto" option for root device
	[  +0.059678] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.054541] systemd-fstab-generator[1304]: Ignoring "noauto" option for root device
	[  +4.097065] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.492435] systemd-fstab-generator[1934]: Ignoring "noauto" option for root device
	[  +1.544137] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.249732] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> etcd [0e68107fbc903649d763805fb3cec827cfee00437ac3d68d656b1ace154c59bf] <==
	{"level":"info","ts":"2024-07-17T01:38:06.258747Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T01:38:06.259726Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.38:2379"}
	{"level":"info","ts":"2024-07-17T01:38:06.259824Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-17T01:45:00.411262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"368.421947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:45:00.412191Z","caller":"traceutil/trace.go:171","msg":"trace[1962081311] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:915; }","duration":"369.380681ms","start":"2024-07-17T01:45:00.042743Z","end":"2024-07-17T01:45:00.412123Z","steps":["trace[1962081311] 'range keys from in-memory index tree'  (duration: 368.299375ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:45:00.412336Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:45:00.04271Z","time spent":"369.597603ms","remote":"127.0.0.1:53366","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-07-17T01:45:46.947818Z","caller":"traceutil/trace.go:171","msg":"trace[1415028627] transaction","detail":"{read_only:false; response_revision:952; number_of_response:1; }","duration":"110.71159ms","start":"2024-07-17T01:45:46.83678Z","end":"2024-07-17T01:45:46.947491Z","steps":["trace[1415028627] 'process raft request'  (duration: 110.548931ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:48:06.284892Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":820}
	{"level":"info","ts":"2024-07-17T01:48:06.295074Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":820,"took":"9.837231ms","hash":1763417678,"current-db-size-bytes":2359296,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2359296,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-17T01:48:06.295162Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1763417678,"revision":820,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T01:49:38.61617Z","caller":"traceutil/trace.go:171","msg":"trace[2063164074] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"106.237509ms","start":"2024-07-17T01:49:38.509883Z","end":"2024-07-17T01:49:38.61612Z","steps":["trace[2063164074] 'process raft request'  (duration: 106.116572ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:49:39.154591Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.7486ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16202421756594917764 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.38\" mod_revision:1131 > success:<request_put:<key:\"/registry/masterleases/192.168.39.38\" value_size:66 lease:6979049719740141953 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.38\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T01:49:39.154695Z","caller":"traceutil/trace.go:171","msg":"trace[717214407] transaction","detail":"{read_only:false; response_revision:1140; number_of_response:1; }","duration":"119.33112ms","start":"2024-07-17T01:49:39.035351Z","end":"2024-07-17T01:49:39.154682Z","steps":["trace[717214407] 'process raft request'  (duration: 10.293038ms)","trace[717214407] 'compare'  (duration: 107.645022ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T01:50:39.113829Z","caller":"traceutil/trace.go:171","msg":"trace[701685871] linearizableReadLoop","detail":"{readStateIndex:1375; appliedIndex:1374; }","duration":"104.479596ms","start":"2024-07-17T01:50:39.00931Z","end":"2024-07-17T01:50:39.113789Z","steps":["trace[701685871] 'read index received'  (duration: 104.27682ms)","trace[701685871] 'applied index is now lower than readState.Index'  (duration: 201.751µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:50:39.1141Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.736394ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:50:39.114135Z","caller":"traceutil/trace.go:171","msg":"trace[1589320175] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1187; }","duration":"104.820899ms","start":"2024-07-17T01:50:39.009305Z","end":"2024-07-17T01:50:39.114125Z","steps":["trace[1589320175] 'agreement among raft nodes before linearized reading'  (duration: 104.708165ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:51:52.444869Z","caller":"traceutil/trace.go:171","msg":"trace[276597141] transaction","detail":"{read_only:false; response_revision:1247; number_of_response:1; }","duration":"104.080501ms","start":"2024-07-17T01:51:52.340658Z","end":"2024-07-17T01:51:52.444738Z","steps":["trace[276597141] 'process raft request'  (duration: 103.660453ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:51:52.597712Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.661965ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:51:52.597866Z","caller":"traceutil/trace.go:171","msg":"trace[1144639251] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1247; }","duration":"123.866096ms","start":"2024-07-17T01:51:52.473983Z","end":"2024-07-17T01:51:52.597849Z","steps":["trace[1144639251] 'range keys from in-memory index tree'  (duration: 123.586354ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:53:06.293307Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1062}
	{"level":"info","ts":"2024-07-17T01:53:06.298604Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1062,"took":"4.36474ms","hash":1808181101,"current-db-size-bytes":2359296,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1208320,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2024-07-17T01:53:06.298722Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1808181101,"revision":1062,"compact-revision":820}
	{"level":"info","ts":"2024-07-17T01:58:06.300609Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1306}
	{"level":"info","ts":"2024-07-17T01:58:06.304265Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1306,"took":"3.349446ms","hash":3967851001,"current-db-size-bytes":2359296,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1187840,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2024-07-17T01:58:06.304327Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3967851001,"revision":1306,"compact-revision":1062}
	
	
	==> kernel <==
	 01:58:43 up 21 min,  0 users,  load average: 0.07, 0.14, 0.16
	Linux no-preload-818382 5.10.207 #1 SMP Mon Jul 15 14:58:18 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8b3944e69af1a1591e836950db17a9950eea3ca607e41745af06630ce8dabce2] <==
	I0717 01:54:08.685298       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 01:54:08.685350       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:56:08.685695       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 01:56:08.685804       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0717 01:56:08.685682       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 01:56:08.685902       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 01:56:08.687245       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 01:56:08.687314       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 01:58:07.683807       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 01:58:07.683942       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0717 01:58:08.686308       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 01:58:08.686365       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0717 01:58:08.686396       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 01:58:08.686442       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 01:58:08.687563       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 01:58:08.687606       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7a78373ef3f847abb20811bd1795baf335b1150c190191bca3413ac36434f32e] <==
	E0717 01:53:42.408340       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:53:42.528405       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 01:53:55.779580       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-818382"
	E0717 01:54:12.416141       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:54:12.535898       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 01:54:26.517979       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="369.321µs"
	I0717 01:54:41.519626       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="226.514µs"
	E0717 01:54:42.422625       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:54:42.548852       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:55:12.429110       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:55:12.557653       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:55:42.435642       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:55:42.565276       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:56:12.442105       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:56:12.576445       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:56:42.448259       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:56:42.592243       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:57:12.455101       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:57:12.601124       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:57:42.461611       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:57:42.608052       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:58:12.469609       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:58:12.616970       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 01:58:42.476901       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 01:58:42.639273       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [98b3c4a1f8778815a59953e693812765eac0d3095d6515dd549b6cf0a6e8a571] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 01:38:09.244386       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0717 01:38:09.263258       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.38"]
	E0717 01:38:09.263493       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0717 01:38:09.342217       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0717 01:38:09.342297       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:38:09.342351       1 server_linux.go:170] "Using iptables Proxier"
	I0717 01:38:09.345127       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0717 01:38:09.345474       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0717 01:38:09.345498       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:38:09.347087       1 config.go:197] "Starting service config controller"
	I0717 01:38:09.347126       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:38:09.347147       1 config.go:104] "Starting endpoint slice config controller"
	I0717 01:38:09.347152       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:38:09.348364       1 config.go:326] "Starting node config controller"
	I0717 01:38:09.348394       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:38:09.448271       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:38:09.448622       1 shared_informer.go:320] Caches are synced for node config
	I0717 01:38:09.448337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b7e8dfc9eddb780586b956667187129da6bccb0e6de71996ca7da0f521692cdc] <==
	I0717 01:38:05.445112       1 serving.go:386] Generated self-signed cert in-memory
	W0717 01:38:07.605727       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:38:07.605926       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:38:07.605959       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:38:07.606036       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:38:07.667669       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0717 01:38:07.667715       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:38:07.671434       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:38:07.671606       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:38:07.671645       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:38:07.671669       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0717 01:38:07.771947       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:56:03 no-preload-818382 kubelet[1311]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:56:03 no-preload-818382 kubelet[1311]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:56:04 no-preload-818382 kubelet[1311]: E0717 01:56:04.502141    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:56:18 no-preload-818382 kubelet[1311]: E0717 01:56:18.502201    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:56:32 no-preload-818382 kubelet[1311]: E0717 01:56:32.502317    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:56:43 no-preload-818382 kubelet[1311]: E0717 01:56:43.504398    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:56:54 no-preload-818382 kubelet[1311]: E0717 01:56:54.502226    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:57:03 no-preload-818382 kubelet[1311]: E0717 01:57:03.515890    1311 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:57:03 no-preload-818382 kubelet[1311]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:57:03 no-preload-818382 kubelet[1311]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:57:03 no-preload-818382 kubelet[1311]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:57:03 no-preload-818382 kubelet[1311]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:57:07 no-preload-818382 kubelet[1311]: E0717 01:57:07.503232    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:57:22 no-preload-818382 kubelet[1311]: E0717 01:57:22.501463    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:57:37 no-preload-818382 kubelet[1311]: E0717 01:57:37.501650    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:57:50 no-preload-818382 kubelet[1311]: E0717 01:57:50.502270    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:58:01 no-preload-818382 kubelet[1311]: E0717 01:58:01.505095    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:58:03 no-preload-818382 kubelet[1311]: E0717 01:58:03.516307    1311 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:58:03 no-preload-818382 kubelet[1311]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:58:03 no-preload-818382 kubelet[1311]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:58:03 no-preload-818382 kubelet[1311]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:58:03 no-preload-818382 kubelet[1311]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:58:14 no-preload-818382 kubelet[1311]: E0717 01:58:14.501367    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:58:29 no-preload-818382 kubelet[1311]: E0717 01:58:29.502724    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	Jul 17 01:58:40 no-preload-818382 kubelet[1311]: E0717 01:58:40.501461    1311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-vgkwg" podUID="6386b732-76a6-4744-9215-e4764e08e4e5"
	
	
	==> storage-provisioner [b36943f541e1b1c11514c8270ca9eb12278f0895cb97b3e993403accb7d5c86a] <==
	I0717 01:38:09.072750       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 01:38:39.076796       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [da9966ff36be870cafddecca67f15c09f780f0669257e5e1cdca231c4df32461] <==
	I0717 01:38:39.821368       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:38:39.836320       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:38:39.836899       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:38:39.855369       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:38:39.855626       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-818382_f90f8659-f815-4e1a-8695-25afb52db782!
	I0717 01:38:39.866232       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d37da931-3b24-4588-9d82-4654a10d779a", APIVersion:"v1", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-818382_f90f8659-f815-4e1a-8695-25afb52db782 became leader
	I0717 01:38:39.956760       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-818382_f90f8659-f815-4e1a-8695-25afb52db782!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-818382 -n no-preload-818382
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-818382 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-vgkwg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-818382 describe pod metrics-server-78fcd8795b-vgkwg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-818382 describe pod metrics-server-78fcd8795b-vgkwg: exit status 1 (59.277207ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-vgkwg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-818382 describe pod metrics-server-78fcd8795b-vgkwg: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (425.84s)

                                                
                                    

Test pass (255/326)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.88
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.2/json-events 6.25
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.07
18 TestDownloadOnly/v1.30.2/DeleteAll 0.14
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 4.48
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.14
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.58
31 TestOffline 124.57
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 138.59
38 TestAddons/parallel/Registry 95.03
40 TestAddons/parallel/InspektorGadget 10.76
42 TestAddons/parallel/HelmTiller 13.07
44 TestAddons/parallel/CSI 60.49
45 TestAddons/parallel/Headlamp 14.02
46 TestAddons/parallel/CloudSpanner 5.55
47 TestAddons/parallel/LocalPath 9.18
48 TestAddons/parallel/NvidiaDevicePlugin 6.73
49 TestAddons/parallel/Yakd 6.01
53 TestAddons/serial/GCPAuth/Namespaces 0.11
55 TestCertOptions 45.99
56 TestCertExpiration 495.66
58 TestForceSystemdFlag 58.18
59 TestForceSystemdEnv 46.59
61 TestKVMDriverInstallOrUpdate 1.36
65 TestErrorSpam/setup 42.4
66 TestErrorSpam/start 0.32
67 TestErrorSpam/status 0.69
68 TestErrorSpam/pause 1.52
69 TestErrorSpam/unpause 1.56
70 TestErrorSpam/stop 5.22
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 59.62
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 35.6
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.15
82 TestFunctional/serial/CacheCmd/cache/add_local 1.02
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
90 TestFunctional/serial/ExtraConfig 32.91
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.38
93 TestFunctional/serial/LogsFileCmd 1.39
94 TestFunctional/serial/InvalidService 3.98
96 TestFunctional/parallel/ConfigCmd 0.31
97 TestFunctional/parallel/DashboardCmd 12.67
98 TestFunctional/parallel/DryRun 0.28
99 TestFunctional/parallel/InternationalLanguage 0.15
100 TestFunctional/parallel/StatusCmd 1.11
104 TestFunctional/parallel/ServiceCmdConnect 7.59
105 TestFunctional/parallel/AddonsCmd 0.12
106 TestFunctional/parallel/PersistentVolumeClaim 37.75
108 TestFunctional/parallel/SSHCmd 0.43
109 TestFunctional/parallel/CpCmd 1.31
110 TestFunctional/parallel/MySQL 21.68
111 TestFunctional/parallel/FileSync 0.24
112 TestFunctional/parallel/CertSync 1.45
116 TestFunctional/parallel/NodeLabels 0.08
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
120 TestFunctional/parallel/License 0.19
121 TestFunctional/parallel/ServiceCmd/DeployApp 10.25
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
123 TestFunctional/parallel/ProfileCmd/profile_list 0.29
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
125 TestFunctional/parallel/MountCmd/any-port 8.98
126 TestFunctional/parallel/MountCmd/specific-port 1.77
127 TestFunctional/parallel/ServiceCmd/List 0.56
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
130 TestFunctional/parallel/ServiceCmd/Format 0.44
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.63
132 TestFunctional/parallel/ServiceCmd/URL 0.56
142 TestFunctional/parallel/Version/short 0.05
143 TestFunctional/parallel/Version/components 0.86
144 TestFunctional/parallel/ImageCommands/ImageListShort 0.44
145 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
146 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
147 TestFunctional/parallel/ImageCommands/ImageListYaml 0.45
148 TestFunctional/parallel/ImageCommands/ImageBuild 2.54
149 TestFunctional/parallel/ImageCommands/Setup 0.45
150 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.36
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.23
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.35
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.82
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 6.04
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 202.66
167 TestMultiControlPlane/serial/DeployApp 5.99
168 TestMultiControlPlane/serial/PingHostFromPods 1.18
169 TestMultiControlPlane/serial/AddWorkerNode 56.03
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
172 TestMultiControlPlane/serial/CopyFile 12.62
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
181 TestMultiControlPlane/serial/RestartCluster 206.87
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
183 TestMultiControlPlane/serial/AddSecondaryNode 72.15
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
188 TestJSONOutput/start/Command 96.63
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.72
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.62
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.4
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 88.31
220 TestMountStart/serial/StartWithMountFirst 24.27
221 TestMountStart/serial/VerifyMountFirst 0.35
222 TestMountStart/serial/StartWithMountSecond 24.12
223 TestMountStart/serial/VerifyMountSecond 0.36
224 TestMountStart/serial/DeleteFirst 0.68
225 TestMountStart/serial/VerifyMountPostDelete 0.36
226 TestMountStart/serial/Stop 1.27
227 TestMountStart/serial/RestartStopped 23.02
228 TestMountStart/serial/VerifyMountPostStop 0.36
231 TestMultiNode/serial/FreshStart2Nodes 120.02
232 TestMultiNode/serial/DeployApp2Nodes 3.79
233 TestMultiNode/serial/PingHostFrom2Pods 0.79
234 TestMultiNode/serial/AddNode 46.05
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.2
237 TestMultiNode/serial/CopyFile 6.91
238 TestMultiNode/serial/StopNode 2.25
239 TestMultiNode/serial/StartAfterStop 37.12
241 TestMultiNode/serial/DeleteNode 2.08
243 TestMultiNode/serial/RestartMultiNode 182.23
244 TestMultiNode/serial/ValidateNameConflict 44.64
251 TestScheduledStopUnix 112.92
255 TestRunningBinaryUpgrade 152.35
275 TestNetworkPlugins/group/false 2.78
280 TestPause/serial/Start 122.69
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
283 TestNoKubernetes/serial/StartWithK8s 44.6
284 TestPause/serial/SecondStartNoReconfiguration 49.18
285 TestNoKubernetes/serial/StartWithStopK8s 4.79
286 TestNoKubernetes/serial/Start 26.14
287 TestPause/serial/Pause 0.7
288 TestPause/serial/VerifyStatus 0.24
289 TestPause/serial/Unpause 0.61
290 TestPause/serial/PauseAgain 0.8
291 TestPause/serial/DeletePaused 0.79
292 TestPause/serial/VerifyDeletedResources 0.38
293 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
294 TestNoKubernetes/serial/ProfileList 1.02
295 TestNoKubernetes/serial/Stop 1.31
296 TestNoKubernetes/serial/StartNoArgs 39.75
297 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
298 TestStoppedBinaryUpgrade/Setup 0.42
299 TestStoppedBinaryUpgrade/Upgrade 135.27
302 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
304 TestStartStop/group/embed-certs/serial/FirstStart 57.56
305 TestStartStop/group/old-k8s-version/serial/Stop 6.33
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
308 TestStartStop/group/embed-certs/serial/DeployApp 9.32
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 78.68
311 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
313 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
317 TestStartStop/group/embed-certs/serial/SecondStart 603.88
319 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 546.33
321 TestStartStop/group/no-preload/serial/FirstStart 81.58
323 TestStartStop/group/no-preload/serial/DeployApp 7.3
324 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
327 TestStartStop/group/no-preload/serial/SecondStart 592.57
335 TestStartStop/group/newest-cni/serial/FirstStart 50.45
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.98
338 TestStartStop/group/newest-cni/serial/Stop 7.32
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
340 TestStartStop/group/newest-cni/serial/SecondStart 36.91
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
344 TestStartStop/group/newest-cni/serial/Pause 3.52
345 TestNetworkPlugins/group/auto/Start 58.81
346 TestNetworkPlugins/group/auto/KubeletFlags 0.2
347 TestNetworkPlugins/group/auto/NetCatPod 11.2
348 TestNetworkPlugins/group/auto/DNS 0.15
349 TestNetworkPlugins/group/auto/Localhost 0.13
350 TestNetworkPlugins/group/auto/HairPin 0.14
351 TestNetworkPlugins/group/kindnet/Start 69.24
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
354 TestNetworkPlugins/group/kindnet/NetCatPod 11.22
355 TestNetworkPlugins/group/kindnet/DNS 0.16
356 TestNetworkPlugins/group/kindnet/Localhost 0.12
357 TestNetworkPlugins/group/kindnet/HairPin 0.14
358 TestNetworkPlugins/group/calico/Start 80.1
359 TestNetworkPlugins/group/custom-flannel/Start 83.92
360 TestNetworkPlugins/group/calico/ControllerPod 6.01
361 TestNetworkPlugins/group/calico/KubeletFlags 0.21
362 TestNetworkPlugins/group/calico/NetCatPod 11.21
363 TestNetworkPlugins/group/calico/DNS 0.14
364 TestNetworkPlugins/group/calico/Localhost 0.16
365 TestNetworkPlugins/group/calico/HairPin 0.12
366 TestNetworkPlugins/group/enable-default-cni/Start 99.03
367 TestNetworkPlugins/group/flannel/Start 96.77
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.19
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.21
370 TestNetworkPlugins/group/custom-flannel/DNS 0.18
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
374 TestNetworkPlugins/group/bridge/Start 63.3
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.21
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
379 TestNetworkPlugins/group/flannel/NetCatPod 10.23
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
384 TestNetworkPlugins/group/bridge/NetCatPod 11.21
385 TestNetworkPlugins/group/flannel/DNS 0.18
386 TestNetworkPlugins/group/flannel/Localhost 0.15
387 TestNetworkPlugins/group/flannel/HairPin 0.14
388 TestNetworkPlugins/group/bridge/DNS 0.18
389 TestNetworkPlugins/group/bridge/Localhost 0.16
390 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.20.0/json-events (11.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-407804 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-407804 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.876377986s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.88s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-407804
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-407804: exit status 85 (61.926205ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-407804 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |          |
	|         | -p download-only-407804        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:04:29
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:04:29.268815   20080 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:04:29.269042   20080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:29.269050   20080 out.go:304] Setting ErrFile to fd 2...
	I0717 00:04:29.269055   20080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:29.269217   20080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	W0717 00:04:29.269345   20080 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19265-12897/.minikube/config/config.json: open /home/jenkins/minikube-integration/19265-12897/.minikube/config/config.json: no such file or directory
	I0717 00:04:29.269908   20080 out.go:298] Setting JSON to true
	I0717 00:04:29.270757   20080 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2818,"bootTime":1721171851,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:04:29.270817   20080 start.go:139] virtualization: kvm guest
	I0717 00:04:29.273263   20080 out.go:97] [download-only-407804] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0717 00:04:29.273398   20080 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 00:04:29.273465   20080 notify.go:220] Checking for updates...
	I0717 00:04:29.274732   20080 out.go:169] MINIKUBE_LOCATION=19265
	I0717 00:04:29.276172   20080 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:04:29.277658   20080 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:04:29.279066   20080 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:04:29.280396   20080 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 00:04:29.283351   20080 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:04:29.283615   20080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:04:29.380971   20080 out.go:97] Using the kvm2 driver based on user configuration
	I0717 00:04:29.381007   20080 start.go:297] selected driver: kvm2
	I0717 00:04:29.381024   20080 start.go:901] validating driver "kvm2" against <nil>
	I0717 00:04:29.381383   20080 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:04:29.381508   20080 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:04:29.396723   20080 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:04:29.396783   20080 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:04:29.397252   20080 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 00:04:29.397415   20080 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:04:29.397442   20080 cni.go:84] Creating CNI manager for ""
	I0717 00:04:29.397452   20080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:04:29.397464   20080 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 00:04:29.397532   20080 start.go:340] cluster config:
	{Name:download-only-407804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-407804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:04:29.397721   20080 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:04:29.399656   20080 out.go:97] Downloading VM boot image ...
	I0717 00:04:29.399711   20080 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/iso/amd64/minikube-v1.33.1-1721037971-19249-amd64.iso
	I0717 00:04:34.641631   20080 out.go:97] Starting "download-only-407804" primary control-plane node in "download-only-407804" cluster
	I0717 00:04:34.641660   20080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 00:04:34.666775   20080 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 00:04:34.666805   20080 cache.go:56] Caching tarball of preloaded images
	I0717 00:04:34.666930   20080 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 00:04:34.668787   20080 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0717 00:04:34.668811   20080 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:04:34.691155   20080 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-407804 host does not exist
	  To start a cluster, run: "minikube start -p download-only-407804"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-407804
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/json-events (6.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-020346 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-020346 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.247227184s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (6.25s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-020346
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-020346: exit status 85 (66.486383ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-407804 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | -p download-only-407804        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-407804        | download-only-407804 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| start   | -o=json --download-only        | download-only-020346 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | -p download-only-020346        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:04:41
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:04:41.478696   20292 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:04:41.478840   20292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:41.478849   20292 out.go:304] Setting ErrFile to fd 2...
	I0717 00:04:41.478856   20292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:41.479069   20292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:04:41.479618   20292 out.go:298] Setting JSON to true
	I0717 00:04:41.480462   20292 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2830,"bootTime":1721171851,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:04:41.480523   20292 start.go:139] virtualization: kvm guest
	I0717 00:04:41.482490   20292 out.go:97] [download-only-020346] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:04:41.482666   20292 notify.go:220] Checking for updates...
	I0717 00:04:41.483919   20292 out.go:169] MINIKUBE_LOCATION=19265
	I0717 00:04:41.485276   20292 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:04:41.486392   20292 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:04:41.487546   20292 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:04:41.488746   20292 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 00:04:41.490986   20292 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:04:41.491186   20292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:04:41.523454   20292 out.go:97] Using the kvm2 driver based on user configuration
	I0717 00:04:41.523482   20292 start.go:297] selected driver: kvm2
	I0717 00:04:41.523497   20292 start.go:901] validating driver "kvm2" against <nil>
	I0717 00:04:41.523867   20292 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:04:41.523950   20292 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19265-12897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:04:41.540028   20292 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:04:41.540082   20292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:04:41.540804   20292 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 00:04:41.541096   20292 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:04:41.541133   20292 cni.go:84] Creating CNI manager for ""
	I0717 00:04:41.541143   20292 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:04:41.541155   20292 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 00:04:41.541221   20292 start.go:340] cluster config:
	{Name:download-only-020346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-020346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:04:41.541349   20292 iso.go:125] acquiring lock: {Name:mk54905fcd116c44dea86fc2fb31112b49cf1464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:04:41.543134   20292 out.go:97] Starting "download-only-020346" primary control-plane node in "download-only-020346" cluster
	I0717 00:04:41.543161   20292 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:04:41.569661   20292 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:04:41.569698   20292 cache.go:56] Caching tarball of preloaded images
	I0717 00:04:41.569860   20292 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:04:41.571576   20292 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0717 00:04:41.571591   20292 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:04:41.603535   20292 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:cd14409e225276132db5cf7d5d75c2d2 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:04:45.207594   20292 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:04:45.207688   20292 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19265-12897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:04:46.064575   20292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:04:46.064896   20292 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/download-only-020346/config.json ...
	I0717 00:04:46.064922   20292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/download-only-020346/config.json: {Name:mk4d61e09b11d9e248a3d80964a9808d82ef140a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:04:46.065061   20292 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:04:46.065182   20292 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19265-12897/.minikube/cache/linux/amd64/v1.30.2/kubectl
	
	
	* The control-plane node download-only-020346 host does not exist
	  To start a cluster, run: "minikube start -p download-only-020346"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-020346
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (4.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-375038 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-375038 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.479492465s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (4.48s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-375038
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-375038: exit status 85 (57.589052ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-407804 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | -p download-only-407804             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-407804             | download-only-407804 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| start   | -o=json --download-only             | download-only-020346 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | -p download-only-020346             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-020346             | download-only-020346 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| start   | -o=json --download-only             | download-only-375038 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | -p download-only-375038             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:04:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:04:48.062891   20496 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:04:48.062984   20496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:48.062991   20496 out.go:304] Setting ErrFile to fd 2...
	I0717 00:04:48.062995   20496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:48.063210   20496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:04:48.063740   20496 out.go:298] Setting JSON to true
	I0717 00:04:48.064553   20496 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2837,"bootTime":1721171851,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:04:48.064647   20496 start.go:139] virtualization: kvm guest
	I0717 00:04:48.066514   20496 out.go:97] [download-only-375038] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:04:48.066665   20496 notify.go:220] Checking for updates...
	I0717 00:04:48.067839   20496 out.go:169] MINIKUBE_LOCATION=19265
	I0717 00:04:48.069061   20496 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:04:48.070296   20496 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:04:48.071642   20496 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:04:48.072995   20496 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-375038 host does not exist
	  To start a cluster, run: "minikube start -p download-only-375038"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-375038
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-998982 --alsologtostderr --binary-mirror http://127.0.0.1:46519 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-998982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-998982
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
TestOffline (124.57s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-722462 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-722462 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m3.576600554s)
helpers_test.go:175: Cleaning up "offline-crio-722462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-722462
--- PASS: TestOffline (124.57s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-860537
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-860537: exit status 85 (54.513816ms)

                                                
                                                
-- stdout --
	* Profile "addons-860537" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-860537"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-860537
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-860537: exit status 85 (52.19338ms)

                                                
                                                
-- stdout --
	* Profile "addons-860537" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-860537"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (138.59s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-860537 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-860537 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m18.591586896s)
--- PASS: TestAddons/Setup (138.59s)

                                                
                                    
TestAddons/parallel/Registry (95.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 22.075686ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-v6n4c" [66c9585d-752a-4ad2-9c99-b9bff568c44d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005158673s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vpbzw" [961d65cb-7faf-4f3a-86ef-8916920fcba6] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004765781s
addons_test.go:342: (dbg) Run:  kubectl --context addons-860537 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-860537 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-860537 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.716503992s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 ip
2024/07/17 00:07:27 [DEBUG] GET http://192.168.39.251:5000
2024/07/17 00:07:27 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:07:27 [DEBUG] GET http://192.168.39.251:5000: retrying in 1s (4 left)
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (95.03s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.76s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nlbvv" [562da503-ec2f-4129-8cf9-b4ac45a498ec] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005209297s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-860537
2024/07/17 00:07:42 [DEBUG] GET http://192.168.39.251:5000
2024/07/17 00:07:42 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:07:42 [DEBUG] GET http://192.168.39.251:5000: retrying in 1s (4 left)
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-860537: (5.751840765s)
--- PASS: TestAddons/parallel/InspektorGadget (10.76s)

                                                
                                    
TestAddons/parallel/HelmTiller (13.07s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 5.954893ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-5nxgc" [77b4eedd-c82b-401f-9057-a7a11b13510b] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004184867s
addons_test.go:475: (dbg) Run:  kubectl --context addons-860537 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-860537 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.440774801s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.07s)

                                                
                                    
TestAddons/parallel/CSI (60.49s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 4.302537ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-860537 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/07/17 00:07:42 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/07/17 00:07:43 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:07:43 [DEBUG] GET http://192.168.39.251:5000: retrying in 2s (3 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/07/17 00:07:45 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:07:45 [DEBUG] GET http://192.168.39.251:5000: retrying in 4s (2 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/07/17 00:07:49 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:07:49 [DEBUG] GET http://192.168.39.251:5000: retrying in 8s (1 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-860537 create -f testdata/csi-hostpath-driver/pv-pod.yaml
2024/07/17 00:07:57 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3f198f2d-3d70-4062-a9bd-449a5cd12f56] Pending
2024/07/17 00:07:58 [DEBUG] GET http://192.168.39.251:5000
2024/07/17 00:07:58 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:07:58 [DEBUG] GET http://192.168.39.251:5000: retrying in 1s (4 left)
helpers_test.go:344: "task-pv-pod" [3f198f2d-3d70-4062-a9bd-449a5cd12f56] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3f198f2d-3d70-4062-a9bd-449a5cd12f56] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004394907s
addons_test.go:586: (dbg) Run:  kubectl --context addons-860537 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-860537 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-860537 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-860537 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-860537 delete pod task-pv-pod: (1.04878851s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-860537 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-860537 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2024/07/17 00:08:13 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2024/07/17 00:08:15 [DEBUG] GET http://192.168.39.251:5000
2024/07/17 00:08:15 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:08:15 [DEBUG] GET http://192.168.39.251:5000: retrying in 1s (4 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2024/07/17 00:08:16 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:08:16 [DEBUG] GET http://192.168.39.251:5000: retrying in 2s (3 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2024/07/17 00:08:18 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:08:18 [DEBUG] GET http://192.168.39.251:5000: retrying in 4s (2 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-860537 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [46090a4f-9a13-4a15-a114-8d5526914358] Pending
helpers_test.go:344: "task-pv-pod-restore" [46090a4f-9a13-4a15-a114-8d5526914358] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [46090a4f-9a13-4a15-a114-8d5526914358] Running
2024/07/17 00:08:22 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:08:22 [DEBUG] GET http://192.168.39.251:5000: retrying in 8s (1 left)
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003899718s
addons_test.go:628: (dbg) Run:  kubectl --context addons-860537 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-860537 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-860537 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 addons disable csi-hostpath-driver --alsologtostderr -v=1
2024/07/17 00:08:30 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:08:31 [DEBUG] GET http://192.168.39.251:5000
2024/07/17 00:08:31 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:08:31 [DEBUG] GET http://192.168.39.251:5000: retrying in 1s (4 left)
2024/07/17 00:08:32 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:08:32 [DEBUG] GET http://192.168.39.251:5000: retrying in 2s (3 left)
2024/07/17 00:08:34 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:08:34 [DEBUG] GET http://192.168.39.251:5000: retrying in 4s (2 left)
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-860537 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.779247327s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.49s)

                                                
                                    
TestAddons/parallel/Headlamp (14.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-860537 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-860537 --alsologtostderr -v=1: (1.011100028s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-rw54z" [22484240-e20c-4ef5-a0da-50269ed47664] Pending
helpers_test.go:344: "headlamp-7867546754-rw54z" [22484240-e20c-4ef5-a0da-50269ed47664] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-rw54z" [22484240-e20c-4ef5-a0da-50269ed47664] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004096852s
--- PASS: TestAddons/parallel/Headlamp (14.02s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-s4kn8" [39de3d49-f999-4a22-bf05-b3ffa32fa270] Running
2024/07/17 00:07:34 [ERR] GET http://192.168.39.251:5000 request failed: Get "http://192.168.39.251:5000": dial tcp 192.168.39.251:5000: connect: connection refused
2024/07/17 00:07:34 [DEBUG] GET http://192.168.39.251:5000: retrying in 8s (1 left)
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003720663s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-860537
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                    
TestAddons/parallel/LocalPath (9.18s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-860537 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-860537 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860537 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1b521423-a9c2-42f3-b13c-3aace93584f5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1b521423-a9c2-42f3-b13c-3aace93584f5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1b521423-a9c2-42f3-b13c-3aace93584f5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004045406s
addons_test.go:992: (dbg) Run:  kubectl --context addons-860537 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 ssh "cat /opt/local-path-provisioner/pvc-52a7cdd9-a848-453e-a1d0-34493d73230f_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-860537 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-860537 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-860537 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.18s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.73s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pcbjh" [631d74e8-bdf2-43b3-b053-cdcade929069] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005492695s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-860537
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.73s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-h6wwn" [4e01edaf-fd5a-4055-adc7-3814ccc74e83] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005856583s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-860537 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-860537 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestCertOptions (45.99s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-514901 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0717 01:45:41.399824   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-514901 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (44.720684988s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-514901 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-514901 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-514901 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-514901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-514901
--- PASS: TestCertOptions (45.99s)

                                                
                                    
TestCertExpiration (495.66s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-838524 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-838524 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (43.604008033s)
E0717 01:22:12.451172   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-838524 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0717 01:24:18.738620   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-838524 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (4m31.170528261s)
helpers_test.go:175: Cleaning up "cert-expiration-838524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-838524
--- PASS: TestCertExpiration (495.66s)

                                                
                                    
TestForceSystemdFlag (58.18s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-804874 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-804874 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (57.16954081s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-804874 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-804874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-804874
--- PASS: TestForceSystemdFlag (58.18s)

                                                
                                    
TestForceSystemdEnv (46.59s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-820894 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-820894 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.810230286s)
helpers_test.go:175: Cleaning up "force-systemd-env-820894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-820894
--- PASS: TestForceSystemdEnv (46.59s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.36s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.36s)

                                                
                                    
TestErrorSpam/setup (42.4s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-901293 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-901293 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-901293 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-901293 --driver=kvm2  --container-runtime=crio: (42.40289333s)
--- PASS: TestErrorSpam/setup (42.40s)

                                                
                                    
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
TestErrorSpam/status (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
TestErrorSpam/pause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 pause
--- PASS: TestErrorSpam/pause (1.52s)

TestErrorSpam/unpause (1.56s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

TestErrorSpam/stop (5.22s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 stop: (2.262900562s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 stop: (1.703497349s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 stop: (1.250128933s)
--- PASS: TestErrorSpam/stop (5.22s)
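
For reference, a minimal sketch of the command sequence the TestErrorSpam subtests exercise, assuming a locally built binary at out/minikube-linux-amd64; the profile name and log directory below simply mirror the run above and are otherwise arbitrary:

  # start a single-node cluster, writing logs to a dedicated directory
  out/minikube-linux-amd64 start -p nospam-901293 -n=1 --memory=2250 --wait=false \
    --log_dir=/tmp/nospam-901293 --driver=kvm2 --container-runtime=crio
  # each subcommand is then run repeatedly and its output checked for unexpected warnings or errors
  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 status
  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 pause
  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 unpause
  out/minikube-linux-amd64 -p nospam-901293 --log_dir /tmp/nospam-901293 stop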

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19265-12897/.minikube/files/etc/test/nested/copy/20068/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.62s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-598951 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0717 00:17:12.450692   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:17:12.456386   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:17:12.466603   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:17:12.486936   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:17:12.527286   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:17:12.607653   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:17:12.768101   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:17:13.088705   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:17:13.729661   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:17:15.010144   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:17:17.570717   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:17:22.691553   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:17:32.932736   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:17:53.413776   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-598951 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (59.623026087s)
--- PASS: TestFunctional/serial/StartWithProxy (59.62s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.6s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-598951 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-598951 --alsologtostderr -v=8: (35.602080471s)
functional_test.go:659: soft start took 35.602754785s for "functional-598951" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.60s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-598951 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 cache add registry.k8s.io/pause:3.3
E0717 00:18:34.374887   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-598951 cache add registry.k8s.io/pause:3.3: (1.134240377s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-598951 cache add registry.k8s.io/pause:latest: (1.040962974s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

TestFunctional/serial/CacheCmd/cache/add_local (1.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-598951 /tmp/TestFunctionalserialCacheCmdcacheadd_local4179413060/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 cache add minikube-local-cache-test:functional-598951
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 cache delete minikube-local-cache-test:functional-598951
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-598951
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.02s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598951 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.064588ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)
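
A minimal sketch of the cache-reload flow this subtest verifies, using the profile name from the run above: the image is removed inside the node, then restored from minikube's local cache.

  # remove the image from the node's container runtime
  out/minikube-linux-amd64 -p functional-598951 ssh sudo crictl rmi registry.k8s.io/pause:latest
  # inspecti now fails: the image is gone from the node
  out/minikube-linux-amd64 -p functional-598951 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  # push everything in the local cache back into the node
  out/minikube-linux-amd64 -p functional-598951 cache reload
  # the image is present again
  out/minikube-linux-amd64 -p functional-598951 ssh sudo crictl inspecti registry.k8s.io/pause:latest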

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 kubectl -- --context functional-598951 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-598951 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (32.91s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-598951 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-598951 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.910911002s)
functional_test.go:757: restart took 32.911021145s for "functional-598951" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.91s)
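
A sketch of the restart this subtest performs: re-running start on an existing profile with --extra-config passes the extra apiserver flag through and waits for all components (profile name and admission-plugin value taken from the run above).

  out/minikube-linux-amd64 start -p functional-598951 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  # control-plane pods should come back Ready; this is the same check ComponentHealth runs next
  kubectl --context functional-598951 get po -l tier=control-plane -n kube-system -o=json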

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-598951 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.38s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-598951 logs: (1.380572501s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

TestFunctional/serial/LogsFileCmd (1.39s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 logs --file /tmp/TestFunctionalserialLogsFileCmd829422725/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-598951 logs --file /tmp/TestFunctionalserialLogsFileCmd829422725/001/logs.txt: (1.392206683s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

TestFunctional/serial/InvalidService (3.98s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-598951 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-598951
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-598951: exit status 115 (263.906179ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.142:30195 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-598951 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.98s)

TestFunctional/parallel/ConfigCmd (0.31s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598951 config get cpus: exit status 14 (51.020545ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598951 config get cpus: exit status 14 (40.946032ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
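
A sketch of the config round-trip exercised above; "config get" on an unset key exits with status 14, which is what the Non-zero exit lines record.

  out/minikube-linux-amd64 -p functional-598951 config unset cpus
  out/minikube-linux-amd64 -p functional-598951 config get cpus   # exit status 14: key not set
  out/minikube-linux-amd64 -p functional-598951 config set cpus 2
  out/minikube-linux-amd64 -p functional-598951 config get cpus   # prints the stored value (2)
  out/minikube-linux-amd64 -p functional-598951 config unset cpus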

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.67s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-598951 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-598951 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28758: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.67s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-598951 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-598951 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.601614ms)

                                                
                                                
-- stdout --
	* [functional-598951] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:19:21.342702   28612 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:19:21.343008   28612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:19:21.343019   28612 out.go:304] Setting ErrFile to fd 2...
	I0717 00:19:21.343026   28612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:19:21.343590   28612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:19:21.344388   28612 out.go:298] Setting JSON to false
	I0717 00:19:21.345304   28612 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3710,"bootTime":1721171851,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:19:21.345362   28612 start.go:139] virtualization: kvm guest
	I0717 00:19:21.347474   28612 out.go:177] * [functional-598951] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:19:21.348871   28612 notify.go:220] Checking for updates...
	I0717 00:19:21.348881   28612 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:19:21.350235   28612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:19:21.351483   28612 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:19:21.352706   28612 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:19:21.353748   28612 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:19:21.354961   28612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:19:21.356724   28612 config.go:182] Loaded profile config "functional-598951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:19:21.357209   28612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:19:21.357260   28612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:19:21.374161   28612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0717 00:19:21.374536   28612 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:19:21.375097   28612 main.go:141] libmachine: Using API Version  1
	I0717 00:19:21.375118   28612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:19:21.375438   28612 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:19:21.375625   28612 main.go:141] libmachine: (functional-598951) Calling .DriverName
	I0717 00:19:21.375844   28612 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:19:21.376167   28612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:19:21.376208   28612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:19:21.392535   28612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33949
	I0717 00:19:21.393024   28612 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:19:21.393540   28612 main.go:141] libmachine: Using API Version  1
	I0717 00:19:21.393556   28612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:19:21.394067   28612 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:19:21.394264   28612 main.go:141] libmachine: (functional-598951) Calling .DriverName
	I0717 00:19:21.429015   28612 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 00:19:21.430534   28612 start.go:297] selected driver: kvm2
	I0717 00:19:21.430556   28612 start.go:901] validating driver "kvm2" against &{Name:functional-598951 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-598951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:19:21.430692   28612 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:19:21.433265   28612 out.go:177] 
	W0717 00:19:21.434625   28612 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 00:19:21.435786   28612 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-598951 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
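
A sketch of the validation path shown above: --dry-run re-validates the existing profile without touching the VM, and an impossible memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23 in this run).

  # fails: 250MB is below the usable minimum of 1800MB, exit status 23
  out/minikube-linux-amd64 start -p functional-598951 --dry-run --memory 250MB \
    --alsologtostderr --driver=kvm2 --container-runtime=crio
  # succeeds: same dry-run without the memory override
  out/minikube-linux-amd64 start -p functional-598951 --dry-run --alsologtostderr -v=1 \
    --driver=kvm2 --container-runtime=crio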

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-598951 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-598951 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.798343ms)

                                                
                                                
-- stdout --
	* [functional-598951] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:19:21.205820   28567 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:19:21.205928   28567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:19:21.205939   28567 out.go:304] Setting ErrFile to fd 2...
	I0717 00:19:21.205944   28567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:19:21.206281   28567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:19:21.206837   28567 out.go:298] Setting JSON to false
	I0717 00:19:21.207815   28567 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3710,"bootTime":1721171851,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:19:21.207872   28567 start.go:139] virtualization: kvm guest
	I0717 00:19:21.210686   28567 out.go:177] * [functional-598951] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0717 00:19:21.211775   28567 notify.go:220] Checking for updates...
	I0717 00:19:21.213322   28567 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:19:21.214452   28567 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:19:21.215569   28567 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 00:19:21.216818   28567 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 00:19:21.218088   28567 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:19:21.219384   28567 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:19:21.221515   28567 config.go:182] Loaded profile config "functional-598951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:19:21.222191   28567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:19:21.222300   28567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:19:21.238637   28567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41645
	I0717 00:19:21.239043   28567 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:19:21.239655   28567 main.go:141] libmachine: Using API Version  1
	I0717 00:19:21.239680   28567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:19:21.240010   28567 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:19:21.240189   28567 main.go:141] libmachine: (functional-598951) Calling .DriverName
	I0717 00:19:21.240450   28567 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:19:21.240775   28567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:19:21.240812   28567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:19:21.257065   28567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35503
	I0717 00:19:21.257498   28567 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:19:21.257964   28567 main.go:141] libmachine: Using API Version  1
	I0717 00:19:21.257989   28567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:19:21.258297   28567 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:19:21.258491   28567 main.go:141] libmachine: (functional-598951) Calling .DriverName
	I0717 00:19:21.291317   28567 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0717 00:19:21.292773   28567 start.go:297] selected driver: kvm2
	I0717 00:19:21.292792   28567 start.go:901] validating driver "kvm2" against &{Name:functional-598951 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19249/minikube-v1.33.1-1721037971-19249-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-598951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:19:21.292938   28567 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:19:21.295316   28567 out.go:177] 
	W0717 00:19:21.296880   28567 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 00:19:21.298261   28567 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (1.11s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)

TestFunctional/parallel/ServiceCmdConnect (7.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-598951 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-598951 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-kmlb9" [d13f7d90-5406-44d3-8d47-9f0088fafb69] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-kmlb9" [d13f7d90-5406-44d3-8d47-9f0088fafb69] Running
2024/07/17 00:19:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005563934s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.142:31067
functional_test.go:1671: http://192.168.39.142:31067: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-kmlb9

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.142:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.142:31067
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.59s)
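
A sketch of the flow above, from deployment to reachable NodePort URL; the image and port come from the test, and the final curl is an illustrative stand-in for the HTTP GET the test performs with its own client.

  kubectl --context functional-598951 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-598951 expose deployment hello-node-connect --type=NodePort --port=8080
  # prints the node URL, e.g. http://192.168.39.142:31067 in this run
  URL=$(out/minikube-linux-amd64 -p functional-598951 service hello-node-connect --url)
  curl "$URL"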

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (37.75s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0dd6b0ce-54c6-4c70-8dbb-2c71aa4cd853] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003874673s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-598951 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-598951 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-598951 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-598951 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e064899c-0b89-46ad-86bb-ba6a29d4c532] Pending
helpers_test.go:344: "sp-pod" [e064899c-0b89-46ad-86bb-ba6a29d4c532] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e064899c-0b89-46ad-86bb-ba6a29d4c532] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.004947012s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-598951 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-598951 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-598951 delete -f testdata/storage-provisioner/pod.yaml: (1.965669256s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-598951 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ccec8791-c89e-4464-a604-ff8e1e6cdab3] Pending
helpers_test.go:344: "sp-pod" [ccec8791-c89e-4464-a604-ff8e1e6cdab3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ccec8791-c89e-4464-a604-ff8e1e6cdab3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003881155s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-598951 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.75s)
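
A sketch of the persistence check above: data written through the PVC-backed pod survives deleting and recreating the pod. The manifests are the repo's testdata/storage-provisioner files referenced in the log, and sp-pod is the pod name those manifests define.

  kubectl --context functional-598951 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-598951 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-598951 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-598951 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-598951 apply -f testdata/storage-provisioner/pod.yaml
  # the file written before the pod was recreated is still there
  kubectl --context functional-598951 exec sp-pod -- ls /tmp/mount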

                                                
                                    
TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh -n functional-598951 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 cp functional-598951:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd812594588/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh -n functional-598951 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh -n functional-598951 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)
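
A sketch of the copy round-trip above: a file is pushed into the node, read back over ssh, then copied out again. Paths mirror the run above except the local destination under /tmp, which is arbitrary here.

  out/minikube-linux-amd64 -p functional-598951 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-amd64 -p functional-598951 ssh -n functional-598951 "sudo cat /home/docker/cp-test.txt"
  out/minikube-linux-amd64 -p functional-598951 cp functional-598951:/home/docker/cp-test.txt /tmp/cp-test.txt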

                                                
                                    
TestFunctional/parallel/MySQL (21.68s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-598951 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-27wk4" [71c4c3f9-1959-4038-801c-188163faa82a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-27wk4" [71c4c3f9-1959-4038-801c-188163faa82a] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.00458095s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-598951 exec mysql-64454c8b5c-27wk4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.68s)

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/20068/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "sudo cat /etc/test/nested/copy/20068/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/20068.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "sudo cat /etc/ssl/certs/20068.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/20068.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "sudo cat /usr/share/ca-certificates/20068.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/200682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "sudo cat /etc/ssl/certs/200682.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/200682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "sudo cat /usr/share/ca-certificates/200682.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.45s)
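
A sketch of the check above: certificates placed on the host are expected to show up inside the VM at the standard trust locations. The numeric file names (20068, 200682, and the hashed .0 links) are specific to this test run; substitute your own certificate names.

  out/minikube-linux-amd64 -p functional-598951 ssh "sudo cat /etc/ssl/certs/20068.pem"
  out/minikube-linux-amd64 -p functional-598951 ssh "sudo cat /usr/share/ca-certificates/20068.pem"
  out/minikube-linux-amd64 -p functional-598951 ssh "sudo cat /etc/ssl/certs/51391683.0"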

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-598951 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598951 ssh "sudo systemctl is-active docker": exit status 1 (244.629854ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598951 ssh "sudo systemctl is-active containerd": exit status 1 (221.620135ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
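The two non-zero exits above are the expected result: "systemctl is-active" prints "inactive" and exits with status 3 when a unit is not running, while exit 0 means active. A minimal Go sketch of the same check over "minikube ssh", with the profile name and binary path taken from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeActive returns true when "systemctl is-active <unit>" inside the VM
// exits 0; a non-zero exit (status 3 in the log above) with output such as
// "inactive" means the unit is present but not running.
func runtimeActive(profile, unit string) (bool, string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit)
	out, err := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err == nil {
		return true, state, nil
	}
	if _, ok := err.(*exec.ExitError); ok {
		return false, state, nil
	}
	return false, "", err // minikube/ssh itself failed to run
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		active, state, err := runtimeActive("functional-598951", unit)
		fmt.Printf("%s active=%v state=%q err=%v\n", unit, active, state, err)
	}
}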

                                                
                                    
x
+
TestFunctional/parallel/License (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-598951 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-598951 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-rwmqd" [d7ebc955-c8f9-4657-a305-d735b63ed313] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-rwmqd" [d7ebc955-c8f9-4657-a305-d735b63ed313] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003926119s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)
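The steps above are: create a deployment from the echoserver image, expose it as a NodePort service, then poll until a pod labeled app=hello-node is Running. A rough Go sketch of the same flow follows; it shells out to kubectl like the test does, but waits on the deployment rollout instead of polling pods by label, which is a simplification rather than the test's own method.

package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	// Same --context the test uses for every kubectl call in this log.
	args = append([]string{"--context", "functional-598951"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver:1.8")
	kubectl("expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")
	// The test waits up to 10m for an app=hello-node pod to become healthy;
	// waiting for the rollout is a close stand-in.
	kubectl("rollout", "status", "deployment/hello-node", "--timeout=10m")
}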

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "238.763936ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "53.260772ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "245.726734ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "47.933059ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)
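Both timings above come from "profile list -o json" and its --light variant. The JSON itself is not printed in this log, so the sketch below decodes it generically rather than assuming a schema; the binary path matches the out/ layout used by these tests.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	// Decode into an untyped value since the exact schema is not shown here.
	var v interface{}
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	pretty, err := json.MarshalIndent(v, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pretty))
}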

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-598951 /tmp/TestFunctionalparallelMountCmdany-port2793317821/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721175559945211858" to /tmp/TestFunctionalparallelMountCmdany-port2793317821/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721175559945211858" to /tmp/TestFunctionalparallelMountCmdany-port2793317821/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721175559945211858" to /tmp/TestFunctionalparallelMountCmdany-port2793317821/001/test-1721175559945211858
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598951 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (275.544095ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 00:19 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 00:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 00:19 test-1721175559945211858
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh cat /mount-9p/test-1721175559945211858
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-598951 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [81b2a37c-86f5-4ea8-a2b2-c092ebff2a12] Pending
helpers_test.go:344: "busybox-mount" [81b2a37c-86f5-4ea8-a2b2-c092ebff2a12] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [81b2a37c-86f5-4ea8-a2b2-c092ebff2a12] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [81b2a37c-86f5-4ea8-a2b2-c092ebff2a12] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004514073s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-598951 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-598951 /tmp/TestFunctionalparallelMountCmdany-port2793317821/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.98s)
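The sequence above is the core 9p mount flow: start "minikube mount" as a background daemon, retry findmnt over ssh until the 9p mount appears (the first attempt fails with exit 1 because the daemon is still starting), exercise the files, then stop the daemon. A minimal Go sketch under the same assumptions; "/tmp/hostdir" is a hypothetical stand-in for the temp directory the test creates.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const profile = "functional-598951"
	// /tmp/hostdir is a placeholder for the host directory to export.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", profile, "/tmp/hostdir:/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	// Stopping the daemon also removes the mount, as at the end of the log.
	defer mount.Process.Kill()

	// Retry findmnt until the 9p mount shows up inside the VM.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted: %s", out)
			return
		}
		time.Sleep(time.Second)
	}
	panic("/mount-9p never appeared")
}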

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-598951 /tmp/TestFunctionalparallelMountCmdspecific-port1001669093/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598951 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (264.104948ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-598951 /tmp/TestFunctionalparallelMountCmdspecific-port1001669093/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598951 ssh "sudo umount -f /mount-9p": exit status 1 (257.319021ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-598951 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-598951 /tmp/TestFunctionalparallelMountCmdspecific-port1001669093/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.77s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 service list -o json
functional_test.go:1490: Took "512.397424ms" to run "out/minikube-linux-amd64 -p functional-598951 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.142:31139
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-598951 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605652289/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-598951 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605652289/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-598951 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605652289/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598951 ssh "findmnt -T" /mount1: exit status 1 (290.413873ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-598951 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-598951 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605652289/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-598951 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605652289/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-598951 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605652289/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)
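With three mount daemons running against the same profile, the single "mount --kill=true" call at functional_test_mount_test.go:370 is what tears them all down; the later per-daemon stops then find nothing left ("unable to find parent, assuming dead"). A small sketch of just that kill step, with the profile name and flag taken from the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Kills every "minikube mount" process associated with the profile.
	out, err := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-598951", "--kill=true").CombinedOutput()
	if err != nil {
		log.Fatalf("mount --kill: %v\n%s", err, out)
	}
	log.Printf("mount daemons stopped\n%s", out)
}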

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.142:31139
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)
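The URL printed above is the node IP plus the NodePort assigned to hello-node. Below is a small sketch that resolves the URL the same way and probes it with a plain GET; the concrete endpoint (http://192.168.39.142:31139 in this run) changes between runs, and the sketch assumes the service exposes a single URL on a single line.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-598951",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // assumes one URL on one line
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s -> %s (%d bytes)\n", url, resp.Status, len(body))
}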

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.86s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-598951 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-598951
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240513-cd2ac642
docker.io/kicbase/echo-server:functional-598951
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-598951 image ls --format short --alsologtostderr:
I0717 00:19:50.125454   30477 out.go:291] Setting OutFile to fd 1 ...
I0717 00:19:50.125598   30477 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:19:50.125610   30477 out.go:304] Setting ErrFile to fd 2...
I0717 00:19:50.125616   30477 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:19:50.125820   30477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
I0717 00:19:50.126837   30477 config.go:182] Loaded profile config "functional-598951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:19:50.127044   30477 config.go:182] Loaded profile config "functional-598951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:19:50.127952   30477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:19:50.128003   30477 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:19:50.142733   30477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
I0717 00:19:50.143269   30477 main.go:141] libmachine: () Calling .GetVersion
I0717 00:19:50.143872   30477 main.go:141] libmachine: Using API Version  1
I0717 00:19:50.143897   30477 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:19:50.144288   30477 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:19:50.144460   30477 main.go:141] libmachine: (functional-598951) Calling .GetState
I0717 00:19:50.146331   30477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:19:50.146365   30477 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:19:50.165388   30477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43051
I0717 00:19:50.165839   30477 main.go:141] libmachine: () Calling .GetVersion
I0717 00:19:50.166470   30477 main.go:141] libmachine: Using API Version  1
I0717 00:19:50.166488   30477 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:19:50.166814   30477 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:19:50.167129   30477 main.go:141] libmachine: (functional-598951) Calling .DriverName
I0717 00:19:50.167318   30477 ssh_runner.go:195] Run: systemctl --version
I0717 00:19:50.167345   30477 main.go:141] libmachine: (functional-598951) Calling .GetSSHHostname
I0717 00:19:50.170588   30477 main.go:141] libmachine: (functional-598951) DBG | domain functional-598951 has defined MAC address 52:54:00:78:10:fe in network mk-functional-598951
I0717 00:19:50.170992   30477 main.go:141] libmachine: (functional-598951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:10:fe", ip: ""} in network mk-functional-598951: {Iface:virbr1 ExpiryTime:2024-07-17 01:17:11 +0000 UTC Type:0 Mac:52:54:00:78:10:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-598951 Clientid:01:52:54:00:78:10:fe}
I0717 00:19:50.171017   30477 main.go:141] libmachine: (functional-598951) DBG | domain functional-598951 has defined IP address 192.168.39.142 and MAC address 52:54:00:78:10:fe in network mk-functional-598951
I0717 00:19:50.171166   30477 main.go:141] libmachine: (functional-598951) Calling .GetSSHPort
I0717 00:19:50.171309   30477 main.go:141] libmachine: (functional-598951) Calling .GetSSHKeyPath
I0717 00:19:50.171447   30477 main.go:141] libmachine: (functional-598951) Calling .GetSSHUsername
I0717 00:19:50.171565   30477 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/functional-598951/id_rsa Username:docker}
I0717 00:19:50.286929   30477 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 00:19:50.456122   30477 main.go:141] libmachine: Making call to close driver server
I0717 00:19:50.456138   30477 main.go:141] libmachine: (functional-598951) Calling .Close
I0717 00:19:50.456421   30477 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:19:50.456442   30477 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 00:19:50.456460   30477 main.go:141] libmachine: Making call to close driver server
I0717 00:19:50.456468   30477 main.go:141] libmachine: (functional-598951) Calling .Close
I0717 00:19:50.456705   30477 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:19:50.456722   30477 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-598951 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-scheduler          | v1.30.2            | 7820c83aa1394 | 63.1MB |
| registry.k8s.io/kube-apiserver          | v1.30.2            | 56ce0fd9fb532 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.2            | e874818b3caac | 112MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240513-cd2ac642 | ac1c61439df46 | 65.9MB |
| docker.io/library/nginx                 | latest             | fffffc90d343c | 192MB  |
| localhost/minikube-local-cache-test     | functional-598951  | 14351c30346e6 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kicbase/echo-server           | functional-598951  | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.30.2            | 53c535741fb44 | 86MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-598951 image ls --format table --alsologtostderr:
I0717 00:19:50.859896   30595 out.go:291] Setting OutFile to fd 1 ...
I0717 00:19:50.860014   30595 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:19:50.860024   30595 out.go:304] Setting ErrFile to fd 2...
I0717 00:19:50.860030   30595 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:19:50.860333   30595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
I0717 00:19:50.860902   30595 config.go:182] Loaded profile config "functional-598951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:19:50.861002   30595 config.go:182] Loaded profile config "functional-598951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:19:50.861360   30595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:19:50.861402   30595 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:19:50.876412   30595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45575
I0717 00:19:50.876940   30595 main.go:141] libmachine: () Calling .GetVersion
I0717 00:19:50.877506   30595 main.go:141] libmachine: Using API Version  1
I0717 00:19:50.877527   30595 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:19:50.877906   30595 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:19:50.878117   30595 main.go:141] libmachine: (functional-598951) Calling .GetState
I0717 00:19:50.880104   30595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:19:50.880153   30595 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:19:50.894631   30595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
I0717 00:19:50.895082   30595 main.go:141] libmachine: () Calling .GetVersion
I0717 00:19:50.895596   30595 main.go:141] libmachine: Using API Version  1
I0717 00:19:50.895625   30595 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:19:50.895984   30595 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:19:50.896222   30595 main.go:141] libmachine: (functional-598951) Calling .DriverName
I0717 00:19:50.896434   30595 ssh_runner.go:195] Run: systemctl --version
I0717 00:19:50.896460   30595 main.go:141] libmachine: (functional-598951) Calling .GetSSHHostname
I0717 00:19:50.899942   30595 main.go:141] libmachine: (functional-598951) DBG | domain functional-598951 has defined MAC address 52:54:00:78:10:fe in network mk-functional-598951
I0717 00:19:50.900409   30595 main.go:141] libmachine: (functional-598951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:10:fe", ip: ""} in network mk-functional-598951: {Iface:virbr1 ExpiryTime:2024-07-17 01:17:11 +0000 UTC Type:0 Mac:52:54:00:78:10:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-598951 Clientid:01:52:54:00:78:10:fe}
I0717 00:19:50.900439   30595 main.go:141] libmachine: (functional-598951) DBG | domain functional-598951 has defined IP address 192.168.39.142 and MAC address 52:54:00:78:10:fe in network mk-functional-598951
I0717 00:19:50.900658   30595 main.go:141] libmachine: (functional-598951) Calling .GetSSHPort
I0717 00:19:50.900835   30595 main.go:141] libmachine: (functional-598951) Calling .GetSSHKeyPath
I0717 00:19:50.900970   30595 main.go:141] libmachine: (functional-598951) Calling .GetSSHUsername
I0717 00:19:50.901165   30595 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/functional-598951/id_rsa Username:docker}
I0717 00:19:50.999290   30595 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 00:19:51.056381   30595 main.go:141] libmachine: Making call to close driver server
I0717 00:19:51.056406   30595 main.go:141] libmachine: (functional-598951) Calling .Close
I0717 00:19:51.056835   30595 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:19:51.056855   30595 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 00:19:51.056872   30595 main.go:141] libmachine: Making call to close driver server
I0717 00:19:51.056881   30595 main.go:141] libmachine: (functional-598951) Calling .Close
I0717 00:19:51.057121   30595 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:19:51.057146   30595 main.go:141] libmachine: (functional-598951) DBG | Closing plugin on server side
I0717 00:19:51.057148   30595 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-598951 image ls --format json --alsologtostderr:
[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f","repoDigests":["docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266","docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"65908273"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1fa
aac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":["registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/ku
be-proxy:v1.30.2"],"size":"85953433"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0
a7a4ff6bffbbe","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117609954"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e","registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"112194888"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-598951"],"size":"4943877"},{"id":"115053965e86b2df4d78af78d795
1b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@s
ha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df","docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244"],"repoTags":["docker.io/library/nginx:latest"],"size":"191746190"},{"id":"14351c30346e60b800e09a1de5aca8a935bdbe43652c69a7d693025be9cb7e22","repoDigests":["localhost/minikube-local-cache-test@sha256:4b08bc13f12304b9cf8847c303da80c33ad45ea8fc380b0be87c3334658bcb30"],"repoTags":["localhost/minikube-local-cache-test:functional-598951"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69e
fbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc","registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"63051080"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-598951 image ls --format json --alsologtostderr:
I0717 00:19:50.573524   30529 out.go:291] Setting OutFile to fd 1 ...
I0717 00:19:50.573795   30529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:19:50.573818   30529 out.go:304] Setting ErrFile to fd 2...
I0717 00:19:50.573829   30529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:19:50.573999   30529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
I0717 00:19:50.574624   30529 config.go:182] Loaded profile config "functional-598951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:19:50.574754   30529 config.go:182] Loaded profile config "functional-598951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:19:50.575206   30529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:19:50.575271   30529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:19:50.590180   30529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
I0717 00:19:50.590648   30529 main.go:141] libmachine: () Calling .GetVersion
I0717 00:19:50.591157   30529 main.go:141] libmachine: Using API Version  1
I0717 00:19:50.591171   30529 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:19:50.591549   30529 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:19:50.591710   30529 main.go:141] libmachine: (functional-598951) Calling .GetState
I0717 00:19:50.593310   30529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:19:50.593360   30529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:19:50.608900   30529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46853
I0717 00:19:50.609306   30529 main.go:141] libmachine: () Calling .GetVersion
I0717 00:19:50.609785   30529 main.go:141] libmachine: Using API Version  1
I0717 00:19:50.609802   30529 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:19:50.610182   30529 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:19:50.610377   30529 main.go:141] libmachine: (functional-598951) Calling .DriverName
I0717 00:19:50.610589   30529 ssh_runner.go:195] Run: systemctl --version
I0717 00:19:50.610609   30529 main.go:141] libmachine: (functional-598951) Calling .GetSSHHostname
I0717 00:19:50.614018   30529 main.go:141] libmachine: (functional-598951) DBG | domain functional-598951 has defined MAC address 52:54:00:78:10:fe in network mk-functional-598951
I0717 00:19:50.614361   30529 main.go:141] libmachine: (functional-598951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:10:fe", ip: ""} in network mk-functional-598951: {Iface:virbr1 ExpiryTime:2024-07-17 01:17:11 +0000 UTC Type:0 Mac:52:54:00:78:10:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-598951 Clientid:01:52:54:00:78:10:fe}
I0717 00:19:50.614387   30529 main.go:141] libmachine: (functional-598951) DBG | domain functional-598951 has defined IP address 192.168.39.142 and MAC address 52:54:00:78:10:fe in network mk-functional-598951
I0717 00:19:50.614596   30529 main.go:141] libmachine: (functional-598951) Calling .GetSSHPort
I0717 00:19:50.614754   30529 main.go:141] libmachine: (functional-598951) Calling .GetSSHKeyPath
I0717 00:19:50.614881   30529 main.go:141] libmachine: (functional-598951) Calling .GetSSHUsername
I0717 00:19:50.615011   30529 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/functional-598951/id_rsa Username:docker}
I0717 00:19:50.695593   30529 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 00:19:50.754141   30529 main.go:141] libmachine: Making call to close driver server
I0717 00:19:50.754164   30529 main.go:141] libmachine: (functional-598951) Calling .Close
I0717 00:19:50.754421   30529 main.go:141] libmachine: (functional-598951) DBG | Closing plugin on server side
I0717 00:19:50.754449   30529 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:19:50.754465   30529 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 00:19:50.754478   30529 main.go:141] libmachine: Making call to close driver server
I0717 00:19:50.754487   30529 main.go:141] libmachine: (functional-598951) Calling .Close
I0717 00:19:50.754704   30529 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:19:50.754721   30529 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
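The stdout above is a single JSON array whose entries carry id, repoDigests, repoTags and size (size is a decimal string of bytes). The sketch below decodes that output into a struct with only those observed fields and totals the sizes; nothing beyond those four keys is assumed.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strconv"
)

// image mirrors only the fields visible in the stdout above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-598951",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	var total int64
	for _, img := range images {
		n, _ := strconv.ParseInt(img.Size, 10, 64)
		total += n
		fmt.Printf("%.12s  %v\n", img.ID, img.RepoTags)
	}
	fmt.Printf("%d images, %d bytes reported in total\n", len(images), total)
}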

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-598951 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-598951
size: "4943877"
- id: 14351c30346e60b800e09a1de5aca8a935bdbe43652c69a7d693025be9cb7e22
repoDigests:
- localhost/minikube-local-cache-test@sha256:4b08bc13f12304b9cf8847c303da80c33ad45ea8fc380b0be87c3334658bcb30
repoTags:
- localhost/minikube-local-cache-test:functional-598951
size: "3330"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
- registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "112194888"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests:
- registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "85953433"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
- registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "63051080"
- id: ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f
repoDigests:
- docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "65908273"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
- docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244
repoTags:
- docker.io/library/nginx:latest
size: "191746190"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117609954"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-598951 image ls --format yaml --alsologtostderr:
I0717 00:19:50.130806   30478 out.go:291] Setting OutFile to fd 1 ...
I0717 00:19:50.131547   30478 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:19:50.131563   30478 out.go:304] Setting ErrFile to fd 2...
I0717 00:19:50.131570   30478 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:19:50.132090   30478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
I0717 00:19:50.133811   30478 config.go:182] Loaded profile config "functional-598951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:19:50.133967   30478 config.go:182] Loaded profile config "functional-598951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:19:50.134389   30478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:19:50.134441   30478 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:19:50.149266   30478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43043
I0717 00:19:50.149749   30478 main.go:141] libmachine: () Calling .GetVersion
I0717 00:19:50.150277   30478 main.go:141] libmachine: Using API Version  1
I0717 00:19:50.150301   30478 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:19:50.150592   30478 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:19:50.150799   30478 main.go:141] libmachine: (functional-598951) Calling .GetState
I0717 00:19:50.152482   30478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:19:50.152570   30478 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:19:50.166407   30478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40013
I0717 00:19:50.166793   30478 main.go:141] libmachine: () Calling .GetVersion
I0717 00:19:50.167329   30478 main.go:141] libmachine: Using API Version  1
I0717 00:19:50.167344   30478 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:19:50.167683   30478 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:19:50.167878   30478 main.go:141] libmachine: (functional-598951) Calling .DriverName
I0717 00:19:50.168246   30478 ssh_runner.go:195] Run: systemctl --version
I0717 00:19:50.168270   30478 main.go:141] libmachine: (functional-598951) Calling .GetSSHHostname
I0717 00:19:50.171487   30478 main.go:141] libmachine: (functional-598951) DBG | domain functional-598951 has defined MAC address 52:54:00:78:10:fe in network mk-functional-598951
I0717 00:19:50.171925   30478 main.go:141] libmachine: (functional-598951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:10:fe", ip: ""} in network mk-functional-598951: {Iface:virbr1 ExpiryTime:2024-07-17 01:17:11 +0000 UTC Type:0 Mac:52:54:00:78:10:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-598951 Clientid:01:52:54:00:78:10:fe}
I0717 00:19:50.171979   30478 main.go:141] libmachine: (functional-598951) DBG | domain functional-598951 has defined IP address 192.168.39.142 and MAC address 52:54:00:78:10:fe in network mk-functional-598951
I0717 00:19:50.172173   30478 main.go:141] libmachine: (functional-598951) Calling .GetSSHPort
I0717 00:19:50.172302   30478 main.go:141] libmachine: (functional-598951) Calling .GetSSHKeyPath
I0717 00:19:50.172456   30478 main.go:141] libmachine: (functional-598951) Calling .GetSSHUsername
I0717 00:19:50.172579   30478 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/functional-598951/id_rsa Username:docker}
I0717 00:19:50.284517   30478 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 00:19:50.483734   30478 main.go:141] libmachine: Making call to close driver server
I0717 00:19:50.483752   30478 main.go:141] libmachine: (functional-598951) Calling .Close
I0717 00:19:50.484098   30478 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:19:50.484115   30478 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 00:19:50.484116   30478 main.go:141] libmachine: (functional-598951) DBG | Closing plugin on server side
I0717 00:19:50.484126   30478 main.go:141] libmachine: Making call to close driver server
I0717 00:19:50.484135   30478 main.go:141] libmachine: (functional-598951) Calling .Close
I0717 00:19:50.484409   30478 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:19:50.484437   30478 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.45s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-598951 ssh pgrep buildkitd: exit status 1 (222.86197ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image build -t localhost/my-image:functional-598951 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-598951 image build -t localhost/my-image:functional-598951 testdata/build --alsologtostderr: (2.106787025s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-598951 image build -t localhost/my-image:functional-598951 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3b39b41c072
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-598951
--> 67aba02cbc9
Successfully tagged localhost/my-image:functional-598951
67aba02cbc9f150fa0fd3adf04f10990c28201350b2f6b51f38b3b640989a552
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-598951 image build -t localhost/my-image:functional-598951 testdata/build --alsologtostderr:
I0717 00:19:50.787102   30576 out.go:291] Setting OutFile to fd 1 ...
I0717 00:19:50.787292   30576 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:19:50.787303   30576 out.go:304] Setting ErrFile to fd 2...
I0717 00:19:50.787309   30576 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:19:50.787600   30576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
I0717 00:19:50.788332   30576 config.go:182] Loaded profile config "functional-598951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:19:50.789083   30576 config.go:182] Loaded profile config "functional-598951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:19:50.789600   30576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:19:50.789686   30576 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:19:50.804171   30576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
I0717 00:19:50.804719   30576 main.go:141] libmachine: () Calling .GetVersion
I0717 00:19:50.805310   30576 main.go:141] libmachine: Using API Version  1
I0717 00:19:50.805337   30576 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:19:50.805711   30576 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:19:50.805915   30576 main.go:141] libmachine: (functional-598951) Calling .GetState
I0717 00:19:50.807715   30576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:19:50.807756   30576 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:19:50.824094   30576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44311
I0717 00:19:50.824525   30576 main.go:141] libmachine: () Calling .GetVersion
I0717 00:19:50.825035   30576 main.go:141] libmachine: Using API Version  1
I0717 00:19:50.825063   30576 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:19:50.825407   30576 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:19:50.825601   30576 main.go:141] libmachine: (functional-598951) Calling .DriverName
I0717 00:19:50.825847   30576 ssh_runner.go:195] Run: systemctl --version
I0717 00:19:50.825877   30576 main.go:141] libmachine: (functional-598951) Calling .GetSSHHostname
I0717 00:19:50.829007   30576 main.go:141] libmachine: (functional-598951) DBG | domain functional-598951 has defined MAC address 52:54:00:78:10:fe in network mk-functional-598951
I0717 00:19:50.829463   30576 main.go:141] libmachine: (functional-598951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:10:fe", ip: ""} in network mk-functional-598951: {Iface:virbr1 ExpiryTime:2024-07-17 01:17:11 +0000 UTC Type:0 Mac:52:54:00:78:10:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-598951 Clientid:01:52:54:00:78:10:fe}
I0717 00:19:50.829495   30576 main.go:141] libmachine: (functional-598951) DBG | domain functional-598951 has defined IP address 192.168.39.142 and MAC address 52:54:00:78:10:fe in network mk-functional-598951
I0717 00:19:50.829804   30576 main.go:141] libmachine: (functional-598951) Calling .GetSSHPort
I0717 00:19:50.829980   30576 main.go:141] libmachine: (functional-598951) Calling .GetSSHKeyPath
I0717 00:19:50.830114   30576 main.go:141] libmachine: (functional-598951) Calling .GetSSHUsername
I0717 00:19:50.830286   30576 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/functional-598951/id_rsa Username:docker}
I0717 00:19:50.932798   30576 build_images.go:161] Building image from path: /tmp/build.2560717113.tar
I0717 00:19:50.932859   30576 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 00:19:50.945871   30576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2560717113.tar
I0717 00:19:50.951945   30576 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2560717113.tar: stat -c "%s %y" /var/lib/minikube/build/build.2560717113.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2560717113.tar': No such file or directory
I0717 00:19:50.951980   30576 ssh_runner.go:362] scp /tmp/build.2560717113.tar --> /var/lib/minikube/build/build.2560717113.tar (3072 bytes)
I0717 00:19:50.992778   30576 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2560717113
I0717 00:19:51.014163   30576 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2560717113 -xf /var/lib/minikube/build/build.2560717113.tar
I0717 00:19:51.058266   30576 crio.go:315] Building image: /var/lib/minikube/build/build.2560717113
I0717 00:19:51.058348   30576 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-598951 /var/lib/minikube/build/build.2560717113 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0717 00:19:52.823770   30576 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-598951 /var/lib/minikube/build/build.2560717113 --cgroup-manager=cgroupfs: (1.765397484s)
I0717 00:19:52.823841   30576 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2560717113
I0717 00:19:52.834512   30576 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2560717113.tar
I0717 00:19:52.844382   30576 build_images.go:217] Built localhost/my-image:functional-598951 from /tmp/build.2560717113.tar
I0717 00:19:52.844416   30576 build_images.go:133] succeeded building to: functional-598951
I0717 00:19:52.844422   30576 build_images.go:134] failed building to: 
I0717 00:19:52.844449   30576 main.go:141] libmachine: Making call to close driver server
I0717 00:19:52.844463   30576 main.go:141] libmachine: (functional-598951) Calling .Close
I0717 00:19:52.844742   30576 main.go:141] libmachine: (functional-598951) DBG | Closing plugin on server side
I0717 00:19:52.844799   30576 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:19:52.844823   30576 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 00:19:52.844842   30576 main.go:141] libmachine: Making call to close driver server
I0717 00:19:52.844872   30576 main.go:141] libmachine: (functional-598951) Calling .Close
I0717 00:19:52.845112   30576 main.go:141] libmachine: (functional-598951) DBG | Closing plugin on server side
I0717 00:19:52.845119   30576 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:19:52.845133   30576 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image ls
E0717 00:19:56.295914   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.54s)
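For reference, the build exercised above can be replayed by hand with the same commands the test drives; this is a minimal sketch, reusing the profile name, tag, and testdata path from this run (on the crio runtime minikube shells out to podman for the build, as the log shows):

    # Build testdata/build (FROM gcr.io/k8s-minikube/busybox; RUN true; ADD content.txt /) inside the VM
    out/minikube-linux-amd64 -p functional-598951 image build \
      -t localhost/my-image:functional-598951 testdata/build --alsologtostderr
    # Confirm the new tag is present in the cluster's image store
    out/minikube-linux-amd64 -p functional-598951 image ls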

TestFunctional/parallel/ImageCommands/Setup (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-598951
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image load --daemon docker.io/kicbase/echo-server:functional-598951 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-598951 image load --daemon docker.io/kicbase/echo-server:functional-598951 --alsologtostderr: (3.133142822s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.36s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image load --daemon docker.io/kicbase/echo-server:functional-598951 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.23s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-598951
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image load --daemon docker.io/kicbase/echo-server:functional-598951 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image save docker.io/kicbase/echo-server:functional-598951 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.82s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-598951 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.729076387s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.04s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-598951
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-598951 image save --daemon docker.io/kicbase/echo-server:functional-598951 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-598951
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)
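The three tests above amount to a save/load round trip for a tagged image; a condensed sketch of the same flow, with the tag taken from this run and the tarball written to the current directory rather than the CI workspace path:

    # Cluster image -> tarball on the host, then back again
    out/minikube-linux-amd64 -p functional-598951 image save \
      docker.io/kicbase/echo-server:functional-598951 ./echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-598951 image load ./echo-server-save.tar --alsologtostderr
    # Cluster image -> host Docker daemon, verified with docker itself
    docker rmi docker.io/kicbase/echo-server:functional-598951
    out/minikube-linux-amd64 -p functional-598951 image save --daemon docker.io/kicbase/echo-server:functional-598951
    docker image inspect docker.io/kicbase/echo-server:functional-598951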

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-598951
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-598951
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-598951
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (202.66s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-565881 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 00:22:12.451025   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 00:22:40.136833   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-565881 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m21.985152064s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (202.66s)

TestMultiControlPlane/serial/DeployApp (5.99s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-565881 -- rollout status deployment/busybox: (3.852769791s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-lmz4q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-rdpwj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-sxdsp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-lmz4q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-rdpwj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-sxdsp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-lmz4q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-rdpwj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-sxdsp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.99s)
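The DeployApp checks reduce to in-pod DNS lookups against each busybox replica; a minimal sketch of one such check (the pod name is whatever the deployment produced in this run, and kubectl can equally be invoked through out/minikube-linux-amd64 kubectl -p ha-565881 -- as the test does):

    kubectl --context ha-565881 rollout status deployment/busybox
    # Every replica must resolve both an external name and the in-cluster service name
    kubectl --context ha-565881 exec busybox-fc5497c4f-lmz4q -- nslookup kubernetes.io
    kubectl --context ha-565881 exec busybox-fc5497c4f-lmz4q -- nslookup kubernetes.default.svc.cluster.local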

TestMultiControlPlane/serial/PingHostFromPods (1.18s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-lmz4q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-lmz4q -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-rdpwj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-rdpwj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-sxdsp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565881 -- exec busybox-fc5497c4f-sxdsp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)

TestMultiControlPlane/serial/AddWorkerNode (56.03s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-565881 -v=7 --alsologtostderr
E0717 00:24:18.738793   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:24:18.744135   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:24:18.754408   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:24:18.774759   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:24:18.815051   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:24:18.895388   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:24:19.056306   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:24:19.377169   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:24:20.017529   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:24:21.298688   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-565881 -v=7 --alsologtostderr: (55.207677299s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
E0717 00:24:23.858997   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.03s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-565881 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

TestMultiControlPlane/serial/CopyFile (12.62s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp testdata/cp-test.txt ha-565881:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile507733948/001/cp-test_ha-565881.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881:/home/docker/cp-test.txt ha-565881-m02:/home/docker/cp-test_ha-565881_ha-565881-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m02 "sudo cat /home/docker/cp-test_ha-565881_ha-565881-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881:/home/docker/cp-test.txt ha-565881-m03:/home/docker/cp-test_ha-565881_ha-565881-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m03 "sudo cat /home/docker/cp-test_ha-565881_ha-565881-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881:/home/docker/cp-test.txt ha-565881-m04:/home/docker/cp-test_ha-565881_ha-565881-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m04 "sudo cat /home/docker/cp-test_ha-565881_ha-565881-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp testdata/cp-test.txt ha-565881-m02:/home/docker/cp-test.txt
E0717 00:24:28.979615   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile507733948/001/cp-test_ha-565881-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881-m02:/home/docker/cp-test.txt ha-565881:/home/docker/cp-test_ha-565881-m02_ha-565881.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881 "sudo cat /home/docker/cp-test_ha-565881-m02_ha-565881.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881-m02:/home/docker/cp-test.txt ha-565881-m03:/home/docker/cp-test_ha-565881-m02_ha-565881-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m03 "sudo cat /home/docker/cp-test_ha-565881-m02_ha-565881-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881-m02:/home/docker/cp-test.txt ha-565881-m04:/home/docker/cp-test_ha-565881-m02_ha-565881-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m04 "sudo cat /home/docker/cp-test_ha-565881-m02_ha-565881-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp testdata/cp-test.txt ha-565881-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile507733948/001/cp-test_ha-565881-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt ha-565881:/home/docker/cp-test_ha-565881-m03_ha-565881.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881 "sudo cat /home/docker/cp-test_ha-565881-m03_ha-565881.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt ha-565881-m02:/home/docker/cp-test_ha-565881-m03_ha-565881-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m02 "sudo cat /home/docker/cp-test_ha-565881-m03_ha-565881-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881-m03:/home/docker/cp-test.txt ha-565881-m04:/home/docker/cp-test_ha-565881-m03_ha-565881-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m04 "sudo cat /home/docker/cp-test_ha-565881-m03_ha-565881-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp testdata/cp-test.txt ha-565881-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile507733948/001/cp-test_ha-565881-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt ha-565881:/home/docker/cp-test_ha-565881-m04_ha-565881.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881 "sudo cat /home/docker/cp-test_ha-565881-m04_ha-565881.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt ha-565881-m02:/home/docker/cp-test_ha-565881-m04_ha-565881-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m02 "sudo cat /home/docker/cp-test_ha-565881-m04_ha-565881-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 cp ha-565881-m04:/home/docker/cp-test.txt ha-565881-m03:/home/docker/cp-test_ha-565881-m04_ha-565881-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m03 "sudo cat /home/docker/cp-test_ha-565881-m04_ha-565881-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.62s)
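Every CopyFile step above is the same two-command pattern: copy a file with minikube cp, then read it back over ssh to prove it landed. A minimal sketch for one node pair from this run:

    # Host -> node
    out/minikube-linux-amd64 -p ha-565881 cp testdata/cp-test.txt ha-565881-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m02 "sudo cat /home/docker/cp-test.txt"
    # Node -> node uses the same source:path destination:path syntax
    out/minikube-linux-amd64 -p ha-565881 cp ha-565881-m02:/home/docker/cp-test.txt \
      ha-565881-m03:/home/docker/cp-test_ha-565881-m02_ha-565881-m03.txt
    out/minikube-linux-amd64 -p ha-565881 ssh -n ha-565881-m03 "sudo cat /home/docker/cp-test_ha-565881-m02_ha-565881-m03.txt"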

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0717 00:27:02.583119   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.474771961s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

TestMultiControlPlane/serial/RestartCluster (206.87s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-565881 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 00:47:12.451103   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-565881 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m26.134957356s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (206.87s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

TestMultiControlPlane/serial/AddSecondaryNode (72.15s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-565881 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-565881 --control-plane -v=7 --alsologtostderr: (1m11.310614984s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.15s)
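The node-add flow here differs from AddWorkerNode earlier in the suite only by the --control-plane flag; a minimal sketch of both variants against the profile from this run:

    # Join a worker node
    out/minikube-linux-amd64 node add -p ha-565881 -v=7 --alsologtostderr
    # Join an additional control-plane node, then confirm overall status
    out/minikube-linux-amd64 node add -p ha-565881 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-565881 status -v=7 --alsologtostderr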

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

TestJSONOutput/start/Command (96.63s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-238701 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0717 00:49:18.740664   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 00:50:15.500318   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-238701 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m36.627305403s)
--- PASS: TestJSONOutput/start/Command (96.63s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-238701 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-238701 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.4s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-238701 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-238701 --output=json --user=testUser: (7.403998985s)
--- PASS: TestJSONOutput/stop/Command (7.40s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-087669 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-087669 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.009836ms)
-- stdout --
	{"specversion":"1.0","id":"db60a95c-bfe9-4ee8-baf5-a9178a13cf4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-087669] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"225b4f28-583b-4828-bb65-5aaaf46aeed5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19265"}}
	{"specversion":"1.0","id":"50dfed22-9b9b-4409-8dbd-d8dde20419d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"aa0b1a84-499d-4db2-b904-104f97f685e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig"}}
	{"specversion":"1.0","id":"583f9072-c1a8-464a-b4b7-13c642bd5af0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube"}}
	{"specversion":"1.0","id":"42eed2c9-fe82-4bff-bd5f-38fa79faf5fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"57bdf2fc-f563-4720-b872-4cd2279899a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6d6fcd9c-dd1c-41c5-a526-ebdcfda006cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-087669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-087669
--- PASS: TestErrorJSONOutput (0.18s)
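Each line in the stdout above is a CloudEvents-style JSON object with a type field, so the error event can be isolated on the command line; a rough sketch, assuming jq is available on the host (jq itself is not used by the test):

    # Exit status 56 is expected; only the io.k8s.sigs.minikube.error event's payload is printed
    out/minikube-linux-amd64 start -p json-output-error-087669 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'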

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (88.31s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-337091 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-337091 --driver=kvm2  --container-runtime=crio: (45.849832967s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-340238 --driver=kvm2  --container-runtime=crio
E0717 00:52:12.451908   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-340238 --driver=kvm2  --container-runtime=crio: (39.89099125s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-337091
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-340238
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-340238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-340238
helpers_test.go:175: Cleaning up "first-337091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-337091
--- PASS: TestMinikubeProfile (88.31s)
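TestMinikubeProfile is just profile switching between two freshly started clusters; a condensed sketch of the same sequence using the profile names from this run:

    out/minikube-linux-amd64 start -p first-337091 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p second-340238 --driver=kvm2 --container-runtime=crio
    # Select a profile, then confirm which one is active from the JSON listing
    out/minikube-linux-amd64 profile first-337091
    out/minikube-linux-amd64 profile list -ojson
    # Clean up both profiles
    out/minikube-linux-amd64 delete -p second-340238
    out/minikube-linux-amd64 delete -p first-337091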

TestMountStart/serial/StartWithMountFirst (24.27s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-951274 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-951274 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.270787896s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.27s)

TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-951274 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-951274 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

TestMountStart/serial/StartWithMountSecond (24.12s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-970624 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-970624 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.121256657s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.12s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-970624 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-970624 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-951274 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-970624 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-970624 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-970624
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-970624: (1.27039105s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.02s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-970624
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-970624: (22.018390375s)
--- PASS: TestMountStart/serial/RestartStopped (23.02s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-970624 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-970624 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (120.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-905682 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 00:54:18.740511   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-905682 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m59.612220431s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (120.02s)
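
A sketch of bringing up the same two-node topology by hand; the flags mirror the start command in the log, and `minikube` stands in for the locally built binary:

    # control plane plus one worker, waiting for all components to be healthy
    minikube start -p multinode-905682 --nodes=2 --memory=2200 --wait=true \
      --driver=kvm2 --container-runtime=crio

    # per-node host/kubelet/apiserver view, as checked by the test
    minikube -p multinode-905682 status --alsologtostderr

    # the worker is registered as <profile>-m02 in Kubernetes
    kubectl --context multinode-905682 get nodes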

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-905682 -- rollout status deployment/busybox: (2.334648274s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- exec busybox-fc5497c4f-hj2hb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- exec busybox-fc5497c4f-l7kh7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- exec busybox-fc5497c4f-hj2hb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- exec busybox-fc5497c4f-l7kh7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- exec busybox-fc5497c4f-hj2hb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- exec busybox-fc5497c4f-l7kh7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.79s)
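
The DNS checks above are driven through the bundled kubectl (`minikube kubectl -p <profile> --`). A condensed sketch; the manifest path comes from the minikube source tree, and picking `.items[0]` assumes the busybox pods are the only pods in the default namespace, as in this run:

    # deploy the busybox workload and wait for the rollout
    minikube kubectl -p multinode-905682 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-905682 -- rollout status deployment/busybox

    # resolve an external name and the in-cluster API service from one pod
    POD=$(minikube kubectl -p multinode-905682 -- get pods -o jsonpath='{.items[0].metadata.name}')
    minikube kubectl -p multinode-905682 -- exec "$POD" -- nslookup kubernetes.io
    minikube kubectl -p multinode-905682 -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local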

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- exec busybox-fc5497c4f-hj2hb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- exec busybox-fc5497c4f-hj2hb -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- exec busybox-fc5497c4f-l7kh7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-905682 -- exec busybox-fc5497c4f-l7kh7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
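
The `awk 'NR==5' | cut -d' ' -f3` pipeline plucks the resolved address of host.minikube.internal out of busybox's nslookup output (it relies on that tool's line layout, as the test does), and the follow-up ping targets 192.168.39.1, the host address on the KVM network. A sketch that feeds the extracted address straight into the ping instead of hard-coding it; the pod name is one of the two from the log:

    POD=busybox-fc5497c4f-hj2hb   # pod names are generated; yours will differ

    # extract the host.minikube.internal address as seen from inside the pod
    HOST_IP=$(minikube kubectl -p multinode-905682 -- exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")

    # one ICMP probe from the pod back to the host
    minikube kubectl -p multinode-905682 -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"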

                                                
                                    
TestMultiNode/serial/AddNode (46.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-905682 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-905682 -v 3 --alsologtostderr: (45.497412894s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.05s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-905682 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 cp testdata/cp-test.txt multinode-905682:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 cp multinode-905682:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2525639886/001/cp-test_multinode-905682.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 cp multinode-905682:/home/docker/cp-test.txt multinode-905682-m02:/home/docker/cp-test_multinode-905682_multinode-905682-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682-m02 "sudo cat /home/docker/cp-test_multinode-905682_multinode-905682-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 cp multinode-905682:/home/docker/cp-test.txt multinode-905682-m03:/home/docker/cp-test_multinode-905682_multinode-905682-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682-m03 "sudo cat /home/docker/cp-test_multinode-905682_multinode-905682-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 cp testdata/cp-test.txt multinode-905682-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 cp multinode-905682-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2525639886/001/cp-test_multinode-905682-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 cp multinode-905682-m02:/home/docker/cp-test.txt multinode-905682:/home/docker/cp-test_multinode-905682-m02_multinode-905682.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682 "sudo cat /home/docker/cp-test_multinode-905682-m02_multinode-905682.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 cp multinode-905682-m02:/home/docker/cp-test.txt multinode-905682-m03:/home/docker/cp-test_multinode-905682-m02_multinode-905682-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682-m03 "sudo cat /home/docker/cp-test_multinode-905682-m02_multinode-905682-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 cp testdata/cp-test.txt multinode-905682-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 cp multinode-905682-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2525639886/001/cp-test_multinode-905682-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 cp multinode-905682-m03:/home/docker/cp-test.txt multinode-905682:/home/docker/cp-test_multinode-905682-m03_multinode-905682.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682 "sudo cat /home/docker/cp-test_multinode-905682-m03_multinode-905682.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 cp multinode-905682-m03:/home/docker/cp-test.txt multinode-905682-m02:/home/docker/cp-test_multinode-905682-m03_multinode-905682-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 ssh -n multinode-905682-m02 "sudo cat /home/docker/cp-test_multinode-905682-m03_multinode-905682-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.91s)
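
The copy matrix above exercises `minikube cp` in all three directions (host to node, node to host, node to node), each time verifying the payload with `cat` over `ssh -n`. A condensed sketch of one host-to-node and one node-to-node hop; the destination file name in the second hop is illustrative:

    # host -> control-plane node, then read it back
    minikube -p multinode-905682 cp testdata/cp-test.txt multinode-905682:/home/docker/cp-test.txt
    minikube -p multinode-905682 ssh -n multinode-905682 "sudo cat /home/docker/cp-test.txt"

    # control-plane node -> first worker, then read it back on the worker
    minikube -p multinode-905682 cp multinode-905682:/home/docker/cp-test.txt \
      multinode-905682-m02:/home/docker/cp-test_copy.txt
    minikube -p multinode-905682 ssh -n multinode-905682-m02 "sudo cat /home/docker/cp-test_copy.txt"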

                                                
                                    
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-905682 node stop m03: (1.416866043s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-905682 status: exit status 7 (405.678033ms)

                                                
                                                
-- stdout --
	multinode-905682
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-905682-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-905682-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-905682 status --alsologtostderr: exit status 7 (423.630552ms)

                                                
                                                
-- stdout --
	multinode-905682
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-905682-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-905682-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:56:41.975865   49036 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:56:41.976010   49036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:56:41.976023   49036 out.go:304] Setting ErrFile to fd 2...
	I0717 00:56:41.976029   49036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:56:41.976245   49036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 00:56:41.976428   49036 out.go:298] Setting JSON to false
	I0717 00:56:41.976457   49036 mustload.go:65] Loading cluster: multinode-905682
	I0717 00:56:41.976582   49036 notify.go:220] Checking for updates...
	I0717 00:56:41.976861   49036 config.go:182] Loaded profile config "multinode-905682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:56:41.976874   49036 status.go:255] checking status of multinode-905682 ...
	I0717 00:56:41.977236   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:56:41.977277   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:56:41.996436   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45929
	I0717 00:56:41.996905   49036 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:56:41.997384   49036 main.go:141] libmachine: Using API Version  1
	I0717 00:56:41.997402   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:56:41.997929   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:56:41.998165   49036 main.go:141] libmachine: (multinode-905682) Calling .GetState
	I0717 00:56:41.999951   49036 status.go:330] multinode-905682 host status = "Running" (err=<nil>)
	I0717 00:56:41.999965   49036 host.go:66] Checking if "multinode-905682" exists ...
	I0717 00:56:42.000272   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:56:42.000302   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:56:42.015659   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33995
	I0717 00:56:42.016108   49036 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:56:42.016626   49036 main.go:141] libmachine: Using API Version  1
	I0717 00:56:42.016649   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:56:42.016928   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:56:42.017108   49036 main.go:141] libmachine: (multinode-905682) Calling .GetIP
	I0717 00:56:42.019688   49036 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:56:42.020114   49036 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:56:42.020150   49036 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:56:42.020227   49036 host.go:66] Checking if "multinode-905682" exists ...
	I0717 00:56:42.020513   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:56:42.020581   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:56:42.035111   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I0717 00:56:42.035499   49036 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:56:42.035957   49036 main.go:141] libmachine: Using API Version  1
	I0717 00:56:42.035984   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:56:42.036275   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:56:42.036451   49036 main.go:141] libmachine: (multinode-905682) Calling .DriverName
	I0717 00:56:42.036663   49036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:56:42.036690   49036 main.go:141] libmachine: (multinode-905682) Calling .GetSSHHostname
	I0717 00:56:42.039347   49036 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:56:42.039778   49036 main.go:141] libmachine: (multinode-905682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c9:17", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:53:56 +0000 UTC Type:0 Mac:52:54:00:e6:c9:17 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-905682 Clientid:01:52:54:00:e6:c9:17}
	I0717 00:56:42.039796   49036 main.go:141] libmachine: (multinode-905682) DBG | domain multinode-905682 has defined IP address 192.168.39.36 and MAC address 52:54:00:e6:c9:17 in network mk-multinode-905682
	I0717 00:56:42.039925   49036 main.go:141] libmachine: (multinode-905682) Calling .GetSSHPort
	I0717 00:56:42.040073   49036 main.go:141] libmachine: (multinode-905682) Calling .GetSSHKeyPath
	I0717 00:56:42.040220   49036 main.go:141] libmachine: (multinode-905682) Calling .GetSSHUsername
	I0717 00:56:42.040357   49036 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/multinode-905682/id_rsa Username:docker}
	I0717 00:56:42.120360   49036 ssh_runner.go:195] Run: systemctl --version
	I0717 00:56:42.126503   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:56:42.147412   49036 kubeconfig.go:125] found "multinode-905682" server: "https://192.168.39.36:8443"
	I0717 00:56:42.147439   49036 api_server.go:166] Checking apiserver status ...
	I0717 00:56:42.147481   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:56:42.162935   49036 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0717 00:56:42.173805   49036 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:56:42.173873   49036 ssh_runner.go:195] Run: ls
	I0717 00:56:42.178788   49036 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0717 00:56:42.184565   49036 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0717 00:56:42.184601   49036 status.go:422] multinode-905682 apiserver status = Running (err=<nil>)
	I0717 00:56:42.184613   49036 status.go:257] multinode-905682 status: &{Name:multinode-905682 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:56:42.184629   49036 status.go:255] checking status of multinode-905682-m02 ...
	I0717 00:56:42.184927   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:56:42.184964   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:56:42.200160   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0717 00:56:42.200591   49036 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:56:42.201048   49036 main.go:141] libmachine: Using API Version  1
	I0717 00:56:42.201071   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:56:42.201400   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:56:42.201595   49036 main.go:141] libmachine: (multinode-905682-m02) Calling .GetState
	I0717 00:56:42.203231   49036 status.go:330] multinode-905682-m02 host status = "Running" (err=<nil>)
	I0717 00:56:42.203246   49036 host.go:66] Checking if "multinode-905682-m02" exists ...
	I0717 00:56:42.203652   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:56:42.203691   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:56:42.218221   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0717 00:56:42.218557   49036 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:56:42.219017   49036 main.go:141] libmachine: Using API Version  1
	I0717 00:56:42.219041   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:56:42.219314   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:56:42.219495   49036 main.go:141] libmachine: (multinode-905682-m02) Calling .GetIP
	I0717 00:56:42.222358   49036 main.go:141] libmachine: (multinode-905682-m02) DBG | domain multinode-905682-m02 has defined MAC address 52:54:00:57:4b:f7 in network mk-multinode-905682
	I0717 00:56:42.222765   49036 main.go:141] libmachine: (multinode-905682-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:4b:f7", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:55:11 +0000 UTC Type:0 Mac:52:54:00:57:4b:f7 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-905682-m02 Clientid:01:52:54:00:57:4b:f7}
	I0717 00:56:42.222791   49036 main.go:141] libmachine: (multinode-905682-m02) DBG | domain multinode-905682-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:57:4b:f7 in network mk-multinode-905682
	I0717 00:56:42.222901   49036 host.go:66] Checking if "multinode-905682-m02" exists ...
	I0717 00:56:42.223207   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:56:42.223247   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:56:42.238200   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I0717 00:56:42.238677   49036 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:56:42.239135   49036 main.go:141] libmachine: Using API Version  1
	I0717 00:56:42.239155   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:56:42.239423   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:56:42.239591   49036 main.go:141] libmachine: (multinode-905682-m02) Calling .DriverName
	I0717 00:56:42.239782   49036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:56:42.239807   49036 main.go:141] libmachine: (multinode-905682-m02) Calling .GetSSHHostname
	I0717 00:56:42.242209   49036 main.go:141] libmachine: (multinode-905682-m02) DBG | domain multinode-905682-m02 has defined MAC address 52:54:00:57:4b:f7 in network mk-multinode-905682
	I0717 00:56:42.242560   49036 main.go:141] libmachine: (multinode-905682-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:4b:f7", ip: ""} in network mk-multinode-905682: {Iface:virbr1 ExpiryTime:2024-07-17 01:55:11 +0000 UTC Type:0 Mac:52:54:00:57:4b:f7 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-905682-m02 Clientid:01:52:54:00:57:4b:f7}
	I0717 00:56:42.242597   49036 main.go:141] libmachine: (multinode-905682-m02) DBG | domain multinode-905682-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:57:4b:f7 in network mk-multinode-905682
	I0717 00:56:42.242657   49036 main.go:141] libmachine: (multinode-905682-m02) Calling .GetSSHPort
	I0717 00:56:42.242823   49036 main.go:141] libmachine: (multinode-905682-m02) Calling .GetSSHKeyPath
	I0717 00:56:42.242974   49036 main.go:141] libmachine: (multinode-905682-m02) Calling .GetSSHUsername
	I0717 00:56:42.243103   49036 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19265-12897/.minikube/machines/multinode-905682-m02/id_rsa Username:docker}
	I0717 00:56:42.324175   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:56:42.338460   49036 status.go:257] multinode-905682-m02 status: &{Name:multinode-905682-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:56:42.338508   49036 status.go:255] checking status of multinode-905682-m03 ...
	I0717 00:56:42.338847   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:56:42.338885   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:56:42.355183   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
	I0717 00:56:42.355609   49036 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:56:42.356124   49036 main.go:141] libmachine: Using API Version  1
	I0717 00:56:42.356144   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:56:42.356439   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:56:42.356680   49036 main.go:141] libmachine: (multinode-905682-m03) Calling .GetState
	I0717 00:56:42.358298   49036 status.go:330] multinode-905682-m03 host status = "Stopped" (err=<nil>)
	I0717 00:56:42.358310   49036 status.go:343] host is not running, skipping remaining checks
	I0717 00:56:42.358316   49036 status.go:257] multinode-905682-m03 status: &{Name:multinode-905682-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
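
Stopping a single node leaves `minikube status` with a non-zero exit code (7 in this run) because one host/kubelet pair is reported Stopped; the test expects exactly that. A sketch that makes the exit code visible:

    # stop only the second worker
    minikube -p multinode-905682 node stop m03

    # status now reports m03 as Stopped and exits non-zero
    minikube -p multinode-905682 status --alsologtostderr
    echo "status exit code: $?"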

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 node start m03 -v=7 --alsologtostderr
E0717 00:57:12.451503   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-905682 node start m03 -v=7 --alsologtostderr: (36.518752343s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.12s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-905682 node delete m03: (1.578697093s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.08s)
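
The go-template query is how the test confirms that, after the delete, every remaining node reports a Ready condition of True. The same query can be run directly; the template is copied from the log:

    # drop the third node from the cluster
    minikube -p multinode-905682 node delete m03

    # print the Ready condition status of every remaining node (expect only "True" lines)
    kubectl --context multinode-905682 get nodes \
      -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'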

                                                
                                    
TestMultiNode/serial/RestartMultiNode (182.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-905682 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 01:06:55.501051   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 01:07:12.451414   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-905682 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.702928833s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-905682 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (182.23s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-905682
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-905682-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-905682-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.522833ms)

                                                
                                                
-- stdout --
	* [multinode-905682-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-905682-m02' is duplicated with machine name 'multinode-905682-m02' in profile 'multinode-905682'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-905682-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-905682-m03 --driver=kvm2  --container-runtime=crio: (43.388934095s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-905682
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-905682: exit status 80 (210.774767ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-905682 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-905682-m03 already exists in multinode-905682-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-905682-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.64s)

                                                
                                    
TestScheduledStopUnix (112.92s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-231759 --memory=2048 --driver=kvm2  --container-runtime=crio
E0717 01:12:12.450769   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-231759 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.354923554s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-231759 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-231759 -n scheduled-stop-231759
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-231759 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-231759 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-231759 -n scheduled-stop-231759
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-231759
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-231759 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-231759
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-231759: exit status 7 (64.145964ms)

                                                
                                                
-- stdout --
	scheduled-stop-231759
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-231759 -n scheduled-stop-231759
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-231759 -n scheduled-stop-231759: exit status 7 (62.980802ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-231759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-231759
--- PASS: TestScheduledStopUnix (112.92s)
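
The scheduled-stop sequence above schedules a stop five minutes out, verifies TimeToStop is set, cancels it, then schedules a 15-second stop and waits for the host to report Stopped. A sketch of the same flow; the sleep length is an assumption that gives the scheduled stop time to fire:

    minikube start -p scheduled-stop-231759 --memory=2048 --driver=kvm2 --container-runtime=crio

    # schedule a stop well in the future, check the countdown, then cancel it
    minikube stop -p scheduled-stop-231759 --schedule 5m
    minikube status -p scheduled-stop-231759 --format='{{.TimeToStop}}'
    minikube stop -p scheduled-stop-231759 --cancel-scheduled

    # schedule a short stop and wait for it to take effect
    minikube stop -p scheduled-stop-231759 --schedule 15s
    sleep 30
    minikube status -p scheduled-stop-231759 --format='{{.Host}}'   # expected output: Stopped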

                                                
                                    
TestRunningBinaryUpgrade (152.35s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1401295625 start -p running-upgrade-261470 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1401295625 start -p running-upgrade-261470 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m11.946246403s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-261470 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-261470 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.534602576s)
helpers_test.go:175: Cleaning up "running-upgrade-261470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-261470
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-261470: (1.30549629s)
--- PASS: TestRunningBinaryUpgrade (152.35s)
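
The upgrade path being tested is: create a cluster with an older released binary (v1.26.0, which the harness downloads to a temp path), then re-run `start` on the same profile with the binary under test, so the running cluster is upgraded in place. A sketch with an illustrative path for the old binary:

    # start with the old release (path is illustrative; the harness uses a temp file)
    /tmp/minikube-v1.26.0 start -p running-upgrade-261470 --memory=2200 \
      --vm-driver=kvm2 --container-runtime=crio

    # re-run start on the same profile with the new binary to upgrade in place
    minikube start -p running-upgrade-261470 --memory=2200 --driver=kvm2 --container-runtime=crio

    # clean up
    minikube delete -p running-upgrade-261470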

                                                
                                    
TestNetworkPlugins/group/false (2.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-453036 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-453036 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (95.279181ms)

                                                
                                                
-- stdout --
	* [false-453036] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:13:34.579201   56557 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:13:34.579304   56557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:13:34.579313   56557 out.go:304] Setting ErrFile to fd 2...
	I0717 01:13:34.579317   56557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:13:34.579486   56557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12897/.minikube/bin
	I0717 01:13:34.580010   56557 out.go:298] Setting JSON to false
	I0717 01:13:34.580890   56557 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6964,"bootTime":1721171851,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:13:34.580957   56557 start.go:139] virtualization: kvm guest
	I0717 01:13:34.583075   56557 out.go:177] * [false-453036] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:13:34.584582   56557 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 01:13:34.584622   56557 notify.go:220] Checking for updates...
	I0717 01:13:34.586895   56557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:13:34.588311   56557 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	I0717 01:13:34.589562   56557 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	I0717 01:13:34.590764   56557 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:13:34.592095   56557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:13:34.593866   56557 config.go:182] Loaded profile config "kubernetes-upgrade-729236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:13:34.593995   56557 config.go:182] Loaded profile config "offline-crio-722462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:13:34.594089   56557 config.go:182] Loaded profile config "old-k8s-version-249342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:13:34.594176   56557 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:13:34.629764   56557 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 01:13:34.630961   56557 start.go:297] selected driver: kvm2
	I0717 01:13:34.630979   56557 start.go:901] validating driver "kvm2" against <nil>
	I0717 01:13:34.630989   56557 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:13:34.632998   56557 out.go:177] 
	W0717 01:13:34.634051   56557 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0717 01:13:34.635207   56557 out.go:177] 

                                                
                                                
** /stderr **
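
The MK_USAGE exit above is the expected result: with the crio runtime, `--cni=false` is refused because crio needs a CNI plugin. For reference, a hedged sketch of a start line crio does accept, naming a CNI explicitly (the choice of bridge and the second profile name are illustrative):

    # rejected in this test: crio without a CNI
    minikube start -p false-453036 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio

    # accepted form: pick an explicit CNI instead
    minikube start -p cni-demo --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio
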
net_test.go:88: 
----------------------- debugLogs start: false-453036 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-453036

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-453036

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-453036

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-453036

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-453036

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-453036

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-453036

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-453036

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-453036

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-453036

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-453036

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-453036" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-453036" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: kubelet daemon config:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> k8s: kubelet logs:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-453036

>>> host: docker daemon status:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: docker daemon config:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: /etc/docker/daemon.json:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: docker system info:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: cri-docker daemon status:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: cri-docker daemon config:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: cri-dockerd version:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: containerd daemon status:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: containerd daemon config:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: /etc/containerd/config.toml:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: containerd config dump:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: crio daemon status:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: crio daemon config:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: /etc/crio:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

>>> host: crio config:
* Profile "false-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-453036"

----------------------- debugLogs end: false-453036 [took: 2.558239571s] --------------------------------
helpers_test.go:175: Cleaning up "false-453036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-453036
--- PASS: TestNetworkPlugins/group/false (2.78s)

TestPause/serial/Start (122.69s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-581130 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0717 01:14:01.785100   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 01:14:18.740736   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-581130 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m2.685688962s)
--- PASS: TestPause/serial/Start (122.69s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-938456 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-938456 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (61.09633ms)

-- stdout --
	* [NoKubernetes-938456] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

TestNoKubernetes/serial/StartWithK8s (44.6s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-938456 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-938456 --driver=kvm2  --container-runtime=crio: (44.359378041s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-938456 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.60s)

TestPause/serial/SecondStartNoReconfiguration (49.18s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-581130 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-581130 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.149715236s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (49.18s)

TestNoKubernetes/serial/StartWithStopK8s (4.79s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-938456 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-938456 --no-kubernetes --driver=kvm2  --container-runtime=crio: (3.805225886s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-938456 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-938456 status -o json: exit status 2 (222.48055ms)

-- stdout --
	{"Name":"NoKubernetes-938456","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-938456
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (4.79s)

TestNoKubernetes/serial/Start (26.14s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-938456 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-938456 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.139712802s)
--- PASS: TestNoKubernetes/serial/Start (26.14s)

TestPause/serial/Pause (0.7s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-581130 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

TestPause/serial/VerifyStatus (0.24s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-581130 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-581130 --output=json --layout=cluster: exit status 2 (241.270945ms)

-- stdout --
	{"Name":"pause-581130","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-581130","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

TestPause/serial/Unpause (0.61s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-581130 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

TestPause/serial/PauseAgain (0.8s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-581130 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

TestPause/serial/DeletePaused (0.79s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-581130 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.79s)

TestPause/serial/VerifyDeletedResources (0.38s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-938456 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-938456 "sudo systemctl is-active --quiet service kubelet": exit status 1 (224.686749ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (1.02s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.02s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-938456
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-938456: (1.308877611s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (39.75s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-938456 --driver=kvm2  --container-runtime=crio
E0717 01:17:12.451441   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-938456 --driver=kvm2  --container-runtime=crio: (39.746086012s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (39.75s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-938456 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-938456 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.697281ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStoppedBinaryUpgrade/Setup (0.42s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.42s)

TestStoppedBinaryUpgrade/Upgrade (135.27s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3482861234 start -p stopped-upgrade-621535 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3482861234 start -p stopped-upgrade-621535 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m13.848308207s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3482861234 -p stopped-upgrade-621535 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3482861234 -p stopped-upgrade-621535 stop: (11.718282501s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-621535 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-621535 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.701798216s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (135.27s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-621535
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-621535: (1.006883563s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

TestStartStop/group/embed-certs/serial/FirstStart (57.56s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-484167 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-484167 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (57.561153186s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (57.56s)

TestStartStop/group/old-k8s-version/serial/Stop (6.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-249342 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-249342 --alsologtostderr -v=3: (6.331057073s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-249342 -n old-k8s-version-249342: exit status 7 (91.001835ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-249342 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-484167 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f9c5cb46-8df1-450a-9ca7-a686651c1835] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f9c5cb46-8df1-450a-9ca7-a686651c1835] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004085718s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-484167 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-945694 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-945694 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (1m18.683432469s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.68s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-484167 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-484167 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.005241826s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-484167 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-945694 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ac161bf9-148c-4c50-a4f0-acfc73cd1acd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ac161bf9-148c-4c50-a4f0-acfc73cd1acd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003775063s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-945694 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-945694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-945694 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/embed-certs/serial/SecondStart (603.88s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-484167 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 01:23:35.501993   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-484167 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (10m3.614148053s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-484167 -n embed-certs-484167
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (603.88s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (546.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-945694 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 01:27:12.451387   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-945694 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (9m6.080424621s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-945694 -n default-k8s-diff-port-945694
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (546.33s)

TestStartStop/group/no-preload/serial/FirstStart (81.58s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-818382 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-818382 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m21.576968212s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (81.58s)

TestStartStop/group/no-preload/serial/DeployApp (7.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-818382 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c1ff7c10-e7aa-4724-afff-9ec2e8657e90] Pending
helpers_test.go:344: "busybox" [c1ff7c10-e7aa-4724-afff-9ec2e8657e90] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c1ff7c10-e7aa-4724-afff-9ec2e8657e90] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004344018s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-818382 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.30s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-818382 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-818382 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.023049058s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-818382 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/no-preload/serial/SecondStart (592.57s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-818382 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-818382 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (9m52.325510511s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-818382 -n no-preload-818382
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (592.57s)

TestStartStop/group/newest-cni/serial/FirstStart (50.45s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-285281 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 01:44:18.738871   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 01:44:19.474715   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
E0717 01:44:19.480036   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
E0717 01:44:19.490319   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
E0717 01:44:19.510629   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
E0717 01:44:19.550969   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
E0717 01:44:19.631863   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
E0717 01:44:19.792896   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
E0717 01:44:20.113635   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
E0717 01:44:20.754860   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
E0717 01:44:22.035543   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
E0717 01:44:24.596047   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-285281 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (50.450056983s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.45s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-285281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/newest-cni/serial/Stop (7.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-285281 --alsologtostderr -v=3
E0717 01:44:29.716919   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-285281 --alsologtostderr -v=3: (7.319963963s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-285281 -n newest-cni-285281
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-285281 -n newest-cni-285281: exit status 7 (65.500378ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-285281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (36.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-285281 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 01:44:39.957777   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
E0717 01:45:00.438657   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-285281 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (36.443047593s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-285281 -n newest-cni-285281
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.91s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-285281 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (3.52s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-285281 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-285281 --alsologtostderr -v=1: (1.038134853s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-285281 -n newest-cni-285281
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-285281 -n newest-cni-285281: exit status 2 (300.938344ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-285281 -n newest-cni-285281
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-285281 -n newest-cni-285281: exit status 2 (374.06208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-285281 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-285281 -n newest-cni-285281
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-285281 -n newest-cni-285281
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.52s)

TestNetworkPlugins/group/auto/Start (58.81s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (58.811949382s)
--- PASS: TestNetworkPlugins/group/auto/Start (58.81s)

TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-453036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

TestNetworkPlugins/group/auto/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-453036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bp228" [a8749126-17cc-46e4-9917-d0c0419c5fac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 01:47:03.320269   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-bp228" [a8749126-17cc-46e4-9917-d0c0419c5fac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004377896s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.20s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-453036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/Start (69.24s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m9.241522432s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.24s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-v64n6" [62a5dcb7-aa3b-4fe5-8f22-05271bbfc6be] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00456506s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-453036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-453036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-d2hfr" [995cb482-b2d1-44b8-b385-51b261afe8a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-d2hfr" [995cb482-b2d1-44b8-b385-51b261afe8a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003814575s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-453036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (80.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0717 01:49:18.739397   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 01:49:19.474685   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
E0717 01:49:47.160739   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m20.097534213s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (83.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m23.921410547s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (83.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2xm2z" [2a06885f-4815-46a2-bbbf-c91bef7647c9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007742718s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-453036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-453036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jc6pq" [82786e89-378d-4986-bdf0-dc29fcc82460] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jc6pq" [82786e89-378d-4986-bdf0-dc29fcc82460] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004497149s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.21s)
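Every NetCatPod step follows the same two moves: force-replace the netcat deployment from testdata/netcat-deployment.yaml, then wait for an app=netcat pod to reach Running (the harness allows up to 15m; the pods above typically get there in 10-11s). A sketch of that replace-then-poll pattern, assuming kubectl is on PATH (not the harness's actual helper):

```go
// netcat_deploy_sketch.go - the replace-then-poll pattern used by the
// NetCatPod steps above. Assumes kubectl is on PATH; illustrative only.
package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := "calico-453036"

	// Recreate the deployment even if an old copy exists, mirroring
	// "kubectl replace --force -f testdata/netcat-deployment.yaml".
	replace := exec.Command("kubectl", "--context", ctx,
		"replace", "--force", "-f", "testdata/netcat-deployment.yaml")
	if out, err := replace.CombinedOutput(); err != nil {
		log.Fatalf("replace failed: %v\n%s", err, out)
	}

	// Poll the pod phase until it reports Running.
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		get := exec.Command("kubectl", "--context", ctx, "get", "pods",
			"-l", "app=netcat", "-o", "jsonpath={.items[*].status.phase}")
		out, _ := get.Output()
		if strings.Contains(string(out), "Running") {
			log.Printf("netcat pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for app=netcat")
}
```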

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-453036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (99.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m39.025404921s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (99.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (96.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m36.769269576s)
--- PASS: TestNetworkPlugins/group/flannel/Start (96.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-453036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-453036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-mmq55" [0bde9945-f88e-4def-b74e-6f9b6cb928c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-mmq55" [0bde9945-f88e-4def-b74e-6f9b6cb928c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.013170031s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-453036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (63.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0717 01:52:00.354153   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:52:00.359485   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:52:00.369780   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:52:00.390093   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:52:00.430454   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:52:00.510816   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:52:00.671450   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:52:00.992041   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:52:01.632964   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:52:02.913665   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:52:05.474461   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:52:10.594919   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:52:12.451632   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 01:52:13.004837   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:52:13.010097   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:52:13.020350   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:52:13.040600   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:52:13.080869   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:52:13.161189   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:52:13.321578   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:52:13.642521   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:52:14.283262   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:52:15.563482   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:52:18.123670   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:52:20.835991   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:52:23.243869   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:52:33.484440   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-453036 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m3.300702536s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.30s)
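All of the Start runs above are the same minikube invocation with only the CNI selection changed: --cni=kindnet, --cni=calico, --cni=flannel, --cni=bridge, a custom manifest path for custom-flannel, or --enable-default-cni=true. A table-driven sketch of that matrix (binary path, flags and profile names are copied from the log; this is not the harness's code):

```go
// cni_matrix_sketch.go - the per-CNI minikube start invocations above,
// expressed as a small matrix. Illustrative only.
package main

import (
	"os/exec"
)

const minikube = "out/minikube-linux-amd64"

// startArgs reproduces the common flag set from the log, varying only the CNI flag.
func startArgs(profile, cniFlag string) []string {
	return []string{
		"start", "-p", profile,
		"--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m",
		cniFlag,
		"--driver=kvm2", "--container-runtime=crio",
	}
}

func main() {
	runs := map[string]string{
		"kindnet-453036":            "--cni=kindnet",
		"calico-453036":             "--cni=calico",
		"custom-flannel-453036":     "--cni=testdata/kube-flannel.yaml",
		"enable-default-cni-453036": "--enable-default-cni=true",
		"flannel-453036":            "--cni=flannel",
		"bridge-453036":             "--cni=bridge",
	}
	for profile, flag := range runs {
		cmd := exec.Command(minikube, startArgs(profile, flag)...)
		_ = cmd.Run() // each start took roughly 60-100s in the log above
	}
}
```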

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-453036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-453036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vqn5k" [b23cbdc2-0a15-4066-a296-d0632f4ba760] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vqn5k" [b23cbdc2-0a15-4066-a296-d0632f4ba760] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006222484s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-k56xk" [c9505839-548a-4564-8340-e7cc6d2ab753] Running
E0717 01:52:41.316238   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003839264s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-453036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-453036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lhksc" [5b133a6e-64af-4595-bcd7-0eede98c401d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-lhksc" [5b133a6e-64af-4595-bcd7-0eede98c401d] Running
E0717 01:52:53.964641   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003681184s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-453036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-453036 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-453036 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hmdwd" [24bf1b84-44d4-4945-b433-52a6eac3a58a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hmdwd" [24bf1b84-44d4-4945-b433-52a6eac3a58a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004772619s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-453036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-453036 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-453036 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
E0717 01:53:22.276401   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:53:34.925660   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:53:35.816327   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:53:35.821557   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:53:35.831808   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:53:35.852129   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:53:35.892410   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:53:35.972791   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:53:36.133304   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:53:36.453976   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:53:37.095184   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:53:38.376307   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:53:40.936691   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:53:46.057647   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:53:56.298666   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:54:16.779663   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:54:18.739160   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/functional-598951/client.crt: no such file or directory
E0717 01:54:19.474578   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/old-k8s-version-249342/client.crt: no such file or directory
E0717 01:54:44.197054   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:54:56.846867   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:54:57.740528   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:55:28.834856   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:55:28.840169   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:55:28.850416   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:55:28.870657   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:55:28.910964   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:55:28.991189   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:55:29.151601   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:55:29.472348   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:55:30.112741   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:55:31.393575   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:55:33.954135   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:55:39.075161   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:55:49.315746   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:56:09.796406   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:56:19.661506   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
E0717 01:56:20.517576   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:56:20.522811   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:56:20.533111   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:56:20.553441   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:56:20.593749   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:56:20.674118   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:56:20.834757   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:56:21.155496   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:56:21.796525   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:56:23.076979   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:56:25.637875   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:56:30.758898   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:56:40.999856   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:56:50.757570   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:56:55.503635   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 01:57:00.354276   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:57:01.480189   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:57:12.451275   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/addons-860537/client.crt: no such file or directory
E0717 01:57:13.004619   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:57:28.037562   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036/client.crt: no such file or directory
E0717 01:57:37.376214   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:57:37.381528   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:57:37.391776   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:57:37.412049   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:57:37.452322   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:57:37.532757   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:57:37.693186   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:57:38.013801   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:57:38.654004   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:57:39.935136   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:57:40.687806   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/default-k8s-diff-port-945694/client.crt: no such file or directory
E0717 01:57:40.717041   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:57:40.722280   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:57:40.732576   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:57:40.752860   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:57:40.793135   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:57:40.873526   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:57:41.033792   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:57:41.354519   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:57:41.995531   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:57:42.441209   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/custom-flannel-453036/client.crt: no such file or directory
E0717 01:57:42.495695   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:57:43.275867   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:57:45.836782   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:57:47.616213   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:57:50.957318   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:57:52.099462   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:57:52.104706   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:57:52.114983   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:57:52.135249   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:57:52.175520   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:57:52.255879   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:57:52.416314   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:57:52.737287   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:57:53.378321   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:57:54.659419   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:57:57.219705   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:57:57.856379   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:58:01.198019   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:58:02.339972   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:58:12.580540   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:58:12.677758   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/calico-453036/client.crt: no such file or directory
E0717 01:58:18.337264   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/enable-default-cni-453036/client.crt: no such file or directory
E0717 01:58:21.678927   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/flannel-453036/client.crt: no such file or directory
E0717 01:58:33.061430   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/bridge-453036/client.crt: no such file or directory
E0717 01:58:35.817355   20068 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12897/.minikube/profiles/kindnet-453036/client.crt: no such file or directory
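The recurring E0717 cert_rotation.go:168 lines above appear to come from client-go's client-certificate reloader: it keeps trying to re-open client.crt files under .minikube/profiles/<name>/ for profiles whose clusters have already been torn down, so each reload logs "no such file or directory". They are log noise, not test failures. A stdlib-only sketch of that failure mode (an illustration, not client-go's implementation):

```go
// cert_reload_sketch.go - illustrates the failure mode behind the E0717
// cert_rotation lines: a periodic reload of a client cert/key pair whose
// files have been deleted. Pure stdlib; not client-go's actual code.
package main

import (
	"crypto/tls"
	"log"
	"time"
)

// reloadLoop re-reads the cert/key pair on an interval and logs when the
// files are gone, echoing the shape of the errors in the report above.
func reloadLoop(certFile, keyFile string, interval time.Duration) {
	for range time.Tick(interval) {
		if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
			log.Printf("key failed with : %v", err)
		}
	}
}

func main() {
	// Path copied from the log; the profile's files no longer exist.
	profile := "/home/jenkins/minikube-integration/19265-12897/.minikube/profiles/auto-453036"
	reloadLoop(profile+"/client.crt", profile+"/client.key", 30*time.Second)
}
```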

                                                
                                    

Test skip (40/326)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.2/cached-images 0
15 TestDownloadOnly/v1.30.2/binaries 0
16 TestDownloadOnly/v1.30.2/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
50 TestAddons/parallel/Volcano 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
261 TestNetworkPlugins/group/kubenet 2.74
267 TestStartStop/group/disable-driver-mounts 0.13
278 TestNetworkPlugins/group/cilium 3
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)
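The kubectl sub-tests in this group are skipped with "Test for darwin and windows", i.e. they only apply on those GOOS values and this job runs on linux. The usual way to express such a guard is runtime.GOOS plus t.Skipf; a sketch (not the actual aaa_download_only_test.go code):

```go
// skip_guard_sketch.go - sketch of an OS-gated skip like the one above.
// Illustrative only; not the actual aaa_download_only_test.go code.
package download_test

import (
	"runtime"
	"testing"
)

func TestKubectlArtifact(t *testing.T) {
	if runtime.GOOS != "darwin" && runtime.GOOS != "windows" {
		t.Skipf("Test for darwin and windows, running on %s", runtime.GOOS)
	}
	// On darwin/windows, the downloaded kubectl artifact would be checked here.
}
```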

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (2.74s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-453036 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-453036

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-453036

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-453036

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-453036

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-453036

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-453036

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-453036

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-453036

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-453036

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-453036

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-453036

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-453036" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-453036" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-453036

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-453036"

                                                
                                                
----------------------- debugLogs end: kubenet-453036 [took: 2.601107229s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-453036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-453036
--- SKIP: TestNetworkPlugins/group/kubenet (2.74s)

TestStartStop/group/disable-driver-mounts (0.13s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-323595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-323595
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestNetworkPlugins/group/cilium (3s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-453036 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-453036" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-453036

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-453036" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-453036"

                                                
                                                
----------------------- debugLogs end: cilium-453036 [took: 2.86756464s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-453036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-453036
--- SKIP: TestNetworkPlugins/group/cilium (3.00s)